Okay, ready? Okay, good afternoon. My name is Marcos Hernandez. I'm a staff engineer with the Networking and Security Business Unit; I support the NSX platform here at VMware, and I'm also part of the VMware Integrated OpenStack team. Today we're here to talk about the new plugin that we have developed for Neutron, which integrates with VDS and NSX for vSphere. That is the content we're going to present today. The agenda is very simple. I figured it would be a good idea to level set on NSX and make sure that everybody has at least a high-level understanding of what NSX is, so we're going to cover some of that. Then we'll jump right into the technical deep dive on how Neutron leverages NSX for vSphere through this new plugin. The plugin is, by the way, offered as part of our VMware Integrated OpenStack product, but it can be used by any OpenStack distribution, as it has been upstreamed and is fully open source. Okay, before I get started, anyone here know what NSX is or has been exposed to it? That's pretty good. That's more than 50% of the room. So we're not going to spend too much time defining or restating the value prop of NSX. But for those of you who do not know, NSX is a network virtualization solution that offers in-hypervisor, or on-top-of-hypervisor, networking services. The idea behind NSX is that we have promoted to software the networking services required by workloads, services that were typically offered in hardware. What that means is that if you need Layer 2 through Layer 7 services for your applications to do something useful for your users, in the traditional provisioning model this would require changes, configuration, and some sort of orchestration in the physical switches, the physical routers, your load balancers, et cetera. With NSX, we promote all these network services to software, and we call these virtual networks. Now the services can be instantiated via API calls, so the full networking stack can be automated. You can create VMs, create applications, and as you do that, you can also create the networking services associated with that application, in a single API call or several. We have turned the network into an API target. The idea there is automation and provisioning speed. NSX still requires a very solid, very robust underlying network. VMware doesn't sell physical switches, so we still need a physical network to run on top of. The vendor that provides that physical network is irrelevant to the operations of NSX, and the topology of that physical network is also irrelevant. What we need is IP connectivity among all the hypervisors included in the NSX domain, but the network has to be reliable. The joke, which you've probably heard a million times if you've heard my pitch before, is that if the physical network is crap, the virtual networks in NSX are just v-crap. So you have to have a reliable network for NSX to deliver on the promise of automation and reliability that we are promoting here. The other aspect of NSX is that, in a sense, we're a software-defined networking solution, although we don't necessarily like to identify ourselves as such. The reason is that we don't generally touch physical switches; there are a couple of exceptions, but our action point is the hypervisor. That is really where we're focusing all of this effort.
But just like some other SDN solutions, we have externalized the control plane and the management plane. So we offer an external control and management plane, which is what you point your cloud management platform to; this is the place you use to enter the network and interact with the NSX services. It is highly available and highly scalable. The control plane is based on the Nicira technology that we acquired 18 months ago, and we have now adapted this control and management plane to work in a vSphere environment. So when we talk about the Neutron interactions, I want you to remember that we are integrating with this control and management plane; we're not talking to the hypervisors directly, and I'll explain later why this is important. Automation is the name of the game, and our control and management plane offers that. I think it's important to double-click on one of the components in NSX, which is our Edge Services Gateway. As you will see a little later, we use this ESG, or VM gateway, in multiple places throughout our integration in the plugin that we have developed for Neutron. So it's important to define what this gateway is and what it is not. This is a router in a VM that offers a good number of advanced network services, but not all of those services are visible to OpenStack. The example that I always present is dynamic routing: our ESG supports OSPF and BGP, but because that support doesn't exist in Neutron, those features are invisible to your OpenStack-based cloud. It's just a little reminder that even though a platform claims to support features X, Y, or Z, at the end of the day the consumption layer becomes the least common denominator; what matters is what your consumption layer can actually instantiate and consume. That is very important, and the last thing you want to do is create automation with a cloud management platform like OpenStack and then go out of band to your infrastructure and start enabling services that are not visible to your cloud infrastructure. So this is a router in a VM that supports all these services, and as these services are added to OpenStack, the good news is that NSX will be ready to support them and turn them on. As OpenStack adds load balancing as a service, VPN as a service, and dynamic routing protocols when that finally comes, the underlying infrastructure you're providing with NSX will already support them, so it's just a matter of enabling that particular service. Okay? Okay, so that's a very quick introduction to NSX and what it is. We'll revisit some of these concepts as we go through this presentation. If you were in the previous session with Dan Wendlandt, you probably saw this slide. This is just a reminder that even though this plugin was first offered with VIO, VMware Integrated OpenStack, a product that we announced a couple of months ago, our participation in the open source community is nothing new. We have been very active since 2012, when Dan and the Nicira team started the Quantum project, which is now Neutron. We are consistently one of the top 10 contributors to Neutron and other projects in OpenStack. So just a reminder that we have a lot of experience here, and in the room right now there are teammates who are actively writing code for Neutron and supporting the plugin.
So if you want to talk about what's happening there, we have all that expertise in the room today. Okay? Okay, so what is this VMware Neutron plugin? It's a production-grade plugin that has been developed by us and that is fully upstream and open source. It is offered as an out-of-the-box option that is automatically configured using a wizard-based approach with VMware Integrated OpenStack, our own distribution. But because it is open source, it can be used by any OpenStack distribution. In fact, we're in conversations with our partners who are OpenStack vendors, and they are testing and integrating this plugin as part of their own packages and offerings. We are not really in the business of selling OpenStack. What we really want is to help you leverage the best capabilities of our infrastructure on the vSphere and NSX side if OpenStack is the strategy you decide on for your private cloud implementation. We're going to double-click on this a little later, but the collection of Neutron services that leverage the NSX infrastructure communicates, through the plugin, with NSX Manager. NSX Manager is the management plane for NSX, and it is also the API entry point into NSX. So when you use our plugin in your OpenStack distribution, or when you use it with VMware Integrated OpenStack, you're not pointing to all the hypervisors to instantiate and deliver networking services. You're just pointing to NSX Manager as a single entry point into the system, and from there NSX Manager goes around and orchestrates all the different network services. So why NSX? There's another option for configuring our plugin, but you're going to see that we keep coming back to NSX as the preferred method. In fact, I'm wearing this "Run NSX" shirt just to remind you that even though we have VDS as an option, you should run NSX with this plugin. If you look at the left column here, these are the attributes of NSX as a network virtualization solution. This is why we want you to use NSX and why you probably want to use NSX: you want more agility, you want mobility for your applications, you want security, multi-tenancy, and simplified operations. If you compare that with the goals of what OpenStack is trying to accomplish, it's the exact same thing. So we believe that we're fully aligned with the OpenStack strategy for adding infrastructure as a service to your private cloud, and we believe NSX is the best choice when you do OpenStack and you have this infrastructure and investment already in place. And that is the last slide of my architecture pitch. So let's get right into the technical deep dive. What you see in this green or gray box is OpenStack and its various projects. There are some missing, right? But at a high level, that is what OpenStack is; that is your OpenStack control plane, in a way. Just a little reminder that this plugin is not the only hook that we have into the VMware products or platforms. If we look at what is happening here with vCenter, we also offer a Nova driver for vCenter. So when you create an instance in OpenStack, that actually triggers the instantiation of a VM in vSphere, all orchestrated by vCenter. We also have drivers for Cinder and Glance that talk to vCenter to provide block storage and image catalog services, respectively. So it is not just Neutron.
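To make "pointing to NSX Manager" concrete, here is a hedged sketch of the plugin wiring. The option names follow the upstream vmware-nsx plugin's nsx.ini conventions, but the core plugin module path and the option set vary by release, and the hostname and credentials are placeholders.

    # Hedged sketch: wire Neutron to NSX Manager as the single API entry
    # point. Option names follow the upstream vmware-nsx plugin's nsx.ini
    # conventions; exact module paths and names vary by release.
    cat >> /etc/neutron/neutron.conf <<'EOF'
    [DEFAULT]
    # The NSX for vSphere core plugin (module path is release-dependent).
    core_plugin = vmware_nsx.plugin.NsxVPlugin
    EOF

    cat >> /etc/neutron/plugins/vmware/nsx.ini <<'EOF'
    [nsxv]
    # Neutron talks only to NSX Manager, never to the hypervisors directly.
    manager_uri = https://nsx-manager.example.local
    user = admin
    password = VMware1!
    EOF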
The focus of our session today is the network interaction between Neutron and NSX, how that happens, and which services and elements we use. But just a reminder that there are other plugins and drivers that you can leverage. Okay? For all of this to work as advertised, it is also important to satisfy some very basic design requirements. We want you to configure the infrastructure in a way that will guarantee success. We have this notion, and if you've seen NSX before you will recognize the concept, of functional clusters. We have a management cluster, an edge cluster, and then all our capacity, or compute, clusters. The management cluster is a vSphere cluster, an aggregation of ESXi hosts, that will host the OpenStack control plane, or the NSX components if you're not using VIO. It can also host your vCenter and your day-2 operational tools, et cetera. So we want you to define and declare a management cluster separate from your capacity compute clusters. Then we have the edge cluster, which can be combined with the management cluster; the management cluster that I talked about can double as an edge cluster. In this edge cluster, or combined management-and-edge cluster, you're going to put all your OpenStack routers, and I'll tell you in a moment which NSX elements we're leveraging for that. It's always a good idea to have your routers in a dedicated cluster like this, especially if you're talking about a leaf-spine architecture where you typically reuse VLANs across the different leaves. It allows you to normalize your compute clusters; the one that is odd and unique, because you're trunking external VLANs to it to communicate with your external users, is the edge cluster. That is the only one subject to modifications; everything else can be standardized. That is why we recommend, as part of the NSX best practices and also in the plugin configuration, having dedicated capacity and resources for your edge services. And like I said earlier, management and edge, if you're dealing with a small environment, can actually be combined into a single cluster. If you've been exposed to the NSX technology, you will know that we have this notion of a transport zone, and we only support one right now in the current iteration of the plugin. This picture illustrates what a transport zone is. You have your compute clusters, one through N, here on the left. You have your edge cluster, which, like I said, will host all your OpenStack routers. And you have the management cluster with the vCenter infrastructure and the OpenStack control plane. A transport zone is an entity in NSX that defines the diameter of my logical overlays. When I create a network in Neutron, if you're using NSX, one of the options is to create a VXLAN that can be used anywhere in the transport zone, by VMs that are anywhere in the transport zone, to connect to the same broadcast domain. So that is what a transport zone is. It defines the diameter.
So you include clusters in the transport zone, and in doing so you're saying: VMs instantiated within these clusters will be able to share the same broadcast domain properties if I connect them to the same VXLAN. Right now, in this version of the plugin, and also in VIO, we support only one transport zone. And this is an NSX thing; currently it's a soft limit, but it's one that should be honored: you can only have 256 hosts in a transport zone. By consequence, this first iteration of the plugin, if you're using just one vCenter and one NSX domain, can only scale to 256 hypervisors, with however many VMs you can cram in there. For most customers, this is more than enough; if we're talking about mega scale, it is obviously a very small footprint. But it's the first iteration of the plugin, and we're looking to scale that. Okay, another prerequisite, whether you're working in VDS mode or NSX mode, is the presence of a virtual distributed switch. How many of you know what a VDS is? Okay, probably 50% again. The VDS is a software switch that lives inside ESXi, and its traditional application, like any other vSphere software switch, has been to connect VMs to VLANs. The VDS offers additional networking services, and it's actually the foundation for NSX: to run NSX, you need a virtual distributed switch. So VDS is kind of a gateway drug to NSX, or something like that. Am I being recorded? Okay. Okay. Scratch that. But anyway, VDS is a requirement for the plugin. It is called distributed because it applies to every hypervisor, but the management of that switch is centralized from vCenter. There are two configurations that we support today for that VDS. The easy one is you create a single VDS and include all your clusters: management, edge, compute. So when you create a port group, which is what we call these layer 2 constructs that map to VLANs or VXLANs, it will be seen by all the hosts in those clusters, because they're sharing the same VDS. Very simple. The other option is you create a compute VDS that spans your compute infrastructure, and then a separate VDS for management and edge. Those are the two options today; we're looking to add more options in the future. If you have leaf-spine or other configurations, maybe a single VDS is not adequate and you need to break them down by functional cluster, et cetera. One of the requirements is for management and edge to be on the same VDS, and when I show you how we implement metadata services, you will see why. Okay? So these are the prerequisites: management, edge, and compute clusters; a single transport zone; and virtual distributed switches, with two options to choose from.
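As a rough illustration of how those prerequisites surface in the plugin configuration, here is a hedged sketch using the upstream vmware-nsx nsx.ini option names. All the managed-object IDs below are placeholders, and the exact option set varies by release.

    # Hedged sketch: tell the plugin which vSphere objects play each role.
    # Option names follow the upstream vmware-nsx nsx.ini conventions;
    # the moref IDs are placeholders.
    cat >> /etc/neutron/plugins/vmware/nsx.ini <<'EOF'
    [nsxv]
    datacenter_moid = datacenter-2
    # Edge cluster: where all the OpenStack routers (NSX Edges) live.
    cluster_moid = domain-c33
    # The single transport zone that defines the diameter of the overlays.
    vdn_scope_id = vdnscope-1
    # The VDS backing VLAN (provider) networks.
    dvs_id = dvs-46
    # The port group used as the external (provider) network.
    external_network = dvportgroup-50
    EOF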
Okay, so let's revisit some of the basic workflows that are available in Neutron and the corresponding service that is created in NSX. Let's say you want to create a multi-tier application, with a web tier and an app tier, plus some routing services and some security services. You go to Neutron and do a neutron net-create, or you go to Horizon and create a network; let's say this is your web tier. When you create that Neutron network, you instantiate what we call a logical switch: a VXLAN that exists within the confines of that transport zone, and any VM in that transport zone can connect to it. So you go to OpenStack, launch two VMs, connect their NICs to the network, and you have that topology there. If you create another network in Neutron, that creates another VNI, another VXLAN, another logical switch in NSX; that is the Neutron terminology and then the NSX taxonomy. Then you can create additional VMs and put them on that logical switch, on that app network. Right now you have two isolated networks. If you enable DHCP for these networks, which you can do from Neutron as well, NSX and the plugin will instantiate one of those Edge Services Gateways that we defined at the beginning and will enable a DHCP scope for these networks. So Neutron, via the plugin, is actually leveraging an NSX component, and not the native Neutron DHCP agent that you get with the reference implementation. This is one example of where we're replacing a component of the reference implementation with a more robust component that can actually scale better and offer high availability. Okay? So if you enable DHCP on one or more networks, this leverages one of the NSX Edge Services Gateways. Then, if you map a VM to a security group in Neutron, during instance creation of course, this leverages what is perhaps the most valuable reason to consider NSX as a replacement for, or an alternative to, the network layer in your OpenStack cloud. When you create security groups in Neutron, we're actually creating distributed firewall rules in NSX. The distributed firewall is a stateful firewall that is enforced, for security, on every hypervisor. Before a packet actually touches the network, the firewall has to make a decision whether to allow it or deny it; hence the notion of distributed. And it is a stateful firewall. It is not OVS ACLs, it's nothing like that. It's a stateful firewall inside ESXi that is protecting your VMs using microsegmentation. So security groups map to distributed firewall rules. And finally, if you want external connectivity, you're going to need a Neutron router, and that router is created as another ESG, another NSX Edge Services Gateway, by the plugin. That router is put on the external network, it gets an address on that external interface, and you can also use additional floating IPs to create the NAT rules for outside-to-inside access. Okay, so these are the basic workflows supported in Neutron, and there is a corresponding element in NSX that is used every time. There are more options that we're going to go through. Okay, let's take a step back and talk about the options for installing and deploying this Neutron plugin for NSX. There's the NSX mode, which is the one we're focusing on today, but there's also a VDS mode. And as you can see in this table, there's a huge difference between the two. If you want to test the waters and understand how our own OpenStack distribution works, you're probably going to want to test it with VDS mode, but just know that there are a lot of limitations. You can essentially just create provider networks and services, that's it. Nothing else, right?
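As a hedged sketch of those basic workflows, here is roughly what the corresponding Kilo-era Neutron CLI calls look like on the OpenStack side. Every name and CIDR below is a placeholder; on the NSX side each step maps to the elements just described.

    # Hedged sketch of the basic workflows (names and CIDRs are placeholders).
    # Each Neutron call maps to an NSX element via the plugin.

    # Neutron network -> NSX logical switch (a VXLAN in the transport zone)
    neutron net-create web-tier
    # Subnet with DHCP enabled (the default) -> a DHCP scope on an NSX Edge
    # Services Gateway, not the reference-implementation DHCP agent
    neutron subnet-create web-tier 10.10.10.0/24 --name web-subnet

    # Security group -> NSX distributed firewall rules on every hypervisor
    neutron security-group-create web-sg
    neutron security-group-rule-create --direction ingress --protocol tcp \
      --port-range-min 80 --port-range-max 80 web-sg

    # Neutron router -> another NSX Edge Services Gateway
    neutron router-create web-router
    neutron router-gateway-set web-router ext-net
    neutron router-interface-add web-router web-subnet

    # Floating IP -> a NAT rule on the ESG for outside-to-inside access
    neutron floatingip-create ext-net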
And that's done on purpose, right? Because we really believe that the scalability and the enterprise-grade support that you're going to want for your private cloud implementation necessitate something like NSX. And with NSX you get the full gamut of features. Everything is supported, from creating networks that are VLAN-backed or VXLAN-backed to layer 3 services. So again, that's why I'm wearing this: run NSX. Okay, so if we look at the topologies supported in this model, I'll show you a picture, I think that will be better, but I have a summary here of what we can do when you use NSX and the NSX for vSphere plugin. Option one is you can create multi-tier networks with DHCP services, and these networks, this green network or orange network, can be VLAN-backed or VXLAN-backed. So you can have VLAN-backed networks, also known as provider networks, fully instantiated and supported by the plugin if you're not operationally ready for overlays; or you can have VXLAN-backed networks. Both topologies are fully supported, and you can have DHCP services provided by our Edge Services Gateway for either of these options. Again, if you want centralized routing services, you create a Neutron router, you connect your networks, and you get NAT, source NAT by default. You can connect to the external network, which we call provider space. And we also support no-NAT topologies, which is table stakes with Neutron: you know that in Neutron you can disable NAT in this type of topology and basically have a routable IP address space in your tenant environment. We support that as well. So that's option number one. Option number two, here on the right, shows the presence of a distributed router. A distributed router is a notion in Neutron, also supported by NSX, that creates optimized traffic flows for east-west communication. If you have a web VM that wants to talk to a database VM and they're on the same hypervisor, you don't have to go through the Neutron router for that purpose; it's a lot more efficient to have them communicate inside the hypervisor without actually hopping on the network. From the tenant perspective, you still have a routing hop: if you sit on that web VM and do a traceroute, you're going to see a routing hop, TTL minus one, all of that will still be there. It's just that the routing path is optimized. We support that in NSX. It's called a distributed router, and you can create a two-tier topology with a distributed router and a centralized router to have optimized east-west traffic engineering plus north-south connectivity. There's a little bit of a caveat that I will talk about on the next slide. In this model we cannot enforce this, you can do it with VLANs, but it is highly recommended that you use distributed routing with VXLAN-backed networks only. That is where you get scalability: a single distributed router in NSX can give you a thousand layer 3 interfaces, so a lot more than a traditional VM approach or what you can do with the Neutron layer 3 agent, et cetera. And in both cases we only support static routing, because that's the only thing Neutron supports today. The CLI expressions of these options are sketched below.
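Here is a hedged sketch of how those topology options are expressed through the Kilo-era Neutron CLI. The names, the VLAN ID, and the physical network label are all placeholders; with the NSX for vSphere plugin the physical network typically identifies the backing VDS.

    # Hedged sketch (names, VLAN ID, and physical network label are placeholders).

    # Option 1a: a VLAN-backed provider network, if you're not ready for overlays
    neutron net-create prod-net \
      --provider:network_type vlan \
      --provider:physical_network dvs \
      --provider:segmentation_id 130

    # Option 1b: a no-NAT topology -- routable tenant address space
    neutron router-gateway-set web-router ext-net --disable-snat

    # Option 2: a distributed router for optimized east-west traffic
    neutron router-create dist-router --distributed True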
So, just a word of caution here. When you create a distributed topology that uses both a distributed router and a centralized router, from an NSX perspective these are seen as two different routers. In Neutron, we do a little bit of gymnastics in the back end to represent these two routers as a single one. So from a Neutron perspective, the distributed router and the centralized router are just one router, a single UUID that describes it. But in NSX, we're actually using two different routing functions, two independent routing entities. What that means is that if you have a very curious tenant who starts tracerouting traffic from the inside to the outside, that tenant is going to see two routing hops between tenant space and provider space. So he's probably going to wonder: hey, why do I have two routing hops if I only provisioned one single Neutron router? That is just the way we implement it right now. It's a quirk, a cosmetic effect. Okay, I think it's important to talk a little bit about dynamic routing, because as I talk to customers more and more, there's a need for this in the enterprise. In the original way in which OpenStack, or Neutron, treated tenant networks, NAT was mandatory. The use case was overlapping IPs: I give the exact same network address space to my developers and I just reuse that over and over. But as we adapt OpenStack to the enterprise, we're finding that overlapping IPs is really not a very popular use case. Anyone here disagree with that? Okay. Great. I'm right. Thank you. The reason is that in the enterprise, unless the use case calls for something else, VMs are long-lived, pet VMs if you will. They're typically known by name and by IP, and there are requirements to access a VM from anywhere, to patch it, monitor it, you name it. So typically those VMs will sit on routable IP address space; therefore, these routers cannot do NAT, they just need to route. But then that creates the problem of a very dynamic and fluid network environment, which is exactly what NSX provides: the ability to create a bunch of networks, destroy a bunch of networks, and do all that. If you're using routable IP address space, you're going to need to tell the rest of your infrastructure what those networks are, and you're going to have to create routing to send traffic to the right place as those networks get created. There are multiple ways to solve this in the absence of a dynamic routing protocol. One way is you summarize, you aggregate, and you just point a default route from your provider infrastructure to the next hop here. But the true solution will come when Neutron finally adds dynamic routing support. We heard today, if you were in the Kilo update session this morning, that it is probably not going to land in Kilo; that's what I understood from the session. But just know that NSX has that functionality, and we're just waiting for Neutron to be able to see it and use it, okay? There is one thing, not perfect, that NSX can help you accomplish, though.
And that is: you create a set of NSX gateways outside of OpenStack, you enable dynamic routing between those gateways and your physical infrastructure, and then you use the NSX API to inject static routes pointing to tenant space every time a new network is created. The benefit there is that if you have, say, an old Cisco router that doesn't have an API, instead of going into a CLI and configuring static routes on physical infrastructure, you just make a script that calls NSX and injects the static route programmatically. You can actually automate that. So there are ways to use NSX to help you create orchestration; in this case it wouldn't happen inside OpenStack, it happens outside of OpenStack, but it shouldn't break anything, okay? Okay, I only have five minutes, so I'm going to speed up a little bit. In NSX, whether you're using a distributed router or a centralized router, we require a VM. It's just a fact. For centralized routing services, that is the Edge Services Gateway that I defined at the very beginning. For the distributed router, even though this is hypervisor-to-hypervisor, in-kernel optimized routing, today we require a service VM, which is used with dynamic routing protocols to establish the adjacencies in OSPF or BGP, whatever. We're looking to change that, by the way; in the next version of NSX that service VM won't be needed with static routing. So it's just an optimization there, but today those VMs are needed. So it is important that in the edge cluster where you're going to put all your OpenStack routers, you preheat the cluster with pre-provisioned NSX routers. Why? Because you want instantaneous satisfaction when a tenant says, I want a router. You don't want to wait for the entire process of deploying a VM to get that router. So what we've done is we pre-populate the edge cluster, and you can configure how many routers you put in that cluster based on your concurrency and performance targets. Then, when you provision a router in OpenStack, all we're doing in NSX is reconfiguring an existing router. That creates the illusion of instantaneous satisfaction. Like I said, we're looking for more elegant implementations, but with the implementation today it's pretty much a requirement, unless you don't mind waiting two minutes to get the router that he or she wants. We also use the Edge Services Gateway for the DHCP services. Like I said, this is one of the elements where we replace the standard Neutron implementation. If you have overlapping IPs, because these edges are routers at the end of the day and we don't support VRFs today, you have to provision another DHCP router. That's just the way it is. One of the enhancements we're working on is adding VRF support. We're also working on a bare-metal gateway that will use DPDK, and there are multiple work streams looking to optimize all this. But today, with our VM approach to virtualizing the DHCP and centralized routing services, if you have overlapping IPs your only option is to spin up another router, and we have intelligence in the plugin that will evacuate networks and spin up new routers based on the number of networks you connect. It's pretty sophisticated and robust. The point being: we use a gateway for the DHCP services.
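To illustrate the out-of-band route-injection approach mentioned a moment ago, here is a hedged sketch of pushing a static route to an NSX Edge through the NSX Manager API. The endpoint path and XML payload follow the NSX for vSphere API style but may differ by version; the host, credentials, edge ID, and addresses are all placeholders.

    # Hedged sketch: inject a static route for a newly created tenant network
    # on an NSX Edge, outside of OpenStack. Endpoint and payload are
    # illustrative and version-dependent; consult the NSX API guide.
    curl -k -u admin:VMware1! -X PUT \
      -H 'Content-Type: application/xml' \
      https://nsx-manager.example.local/api/4.0/edges/edge-42/routing/config/static \
      -d '<staticRouting>
            <staticRoutes>
              <route>
                <network>10.10.10.0/24</network>
                <nextHop>192.168.100.2</nextHop>
              </route>
            </staticRoutes>
          </staticRouting>'

And for the pre-populated edge cluster, the upstream vmware-nsx plugin exposes a pool-sizing option along these lines; this is a hedged sketch, and the exact option name and format (edge type, appliance size, minimum and maximum pooled edges) vary by release.

    # Hedged sketch: pre-provisioned router pool sizing in nsx.ini
    # (edge_type:appliance_size:min:max -- format varies by release).
    cat >> /etc/neutron/plugins/vmware/nsx.ini <<'EOF'
    [nsxv]
    backup_edge_pool = service:compact:4:10,vdr:compact:4:10
    EOF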
Okay, security groups. I talked about this already: security groups in OpenStack map to distributed firewall rules in NSX. This is a very, very powerful feature. We have at least two customers looking to go into production with our own OpenStack distribution and NSX that are using just security groups. That is the killer app for NSX. They put the VMs on some VLANs and they use NSX to bring the benefits of microsegmentation to the VMs, plus all the other things that go along with that, like redirection to introspection services, et cetera. So that is an option for you: if you're not ready for overlays, if the networking team has some apprehension about how to operationalize overlay-based networks, you can just give them the VLANs that they love and still use things like microsegmentation and advanced service redirection by means of the distributed firewall. We also have a feature called SpoofGuard that can be used; I'm just going to bypass this. It's a very important feature that gives you the ability to basically black-hole traffic in the network for a suspect VM. We're using it for some other reasons as well, but it is an NSX feature that we heavily, heavily leverage as part of our plugin integration. Okay, so finally, to wrap it up, just a word on day-2 management and operations. This is not directly related to the plugin; these are tools and capabilities that can be used with any OpenStack implementation. But just a reminder that it's not just about being able to provision VMs. It's: what do I do with all this after I have OpenStack? How do I troubleshoot it? How do I characterize performance? How do I dimension for growth? We have instrumentation native to our portfolio that can help you do that. One is our log management tool, vRealize Log Insight, which has 56 predefined dashboards that can help you troubleshoot and characterize performance in an OpenStack cloud. And if you operate an OpenStack cloud, you will know that doing that without some form of syslog aggregation is impossible; it's just a non-starter. Then on the analytics side we have vRealize Operations, an analytics engine that can help you simulate capacity, understand growth rates, and also troubleshoot and detect alerts and gaps, security gaps and configuration gaps, in your infrastructure. And we're monitoring not just the infrastructure; we're monitoring processes inside your OpenStack control plane, like all the different Nova services, et cetera, and reporting that to an aggregation dashboard that will show you topologies, the layout of your OpenStack tenants, and the performance and utilization of each one. And finally, there's one that I didn't include, but it can also help you with metering and all the showback, chargeback, and shameback of an OpenStack cloud without having to resort to something like Ceilometer, and that is called vRealize Business. So the whole point here, and I will conclude my session with this, is that if you're invested in vSphere as your virtualization layer and you're looking to implement a software-defined networking solution with NSX, this plugin can help you integrate and get to that enterprise-grade cloud, and all these other satellite services can actually let you operate this cloud. Okay, so thank you so much for your time. I have time for questions. Any questions? Are we sharing the slides? Yes, we are. Yes, absolutely.
I actually have it in my backup, I think. The question is: can you talk about IPv6 support? In NSX today, we're also subject to the gaps in Neutron, where IPv6 is not, I should say, fully supported in OpenStack. And in NSX, IPv6 support is also fragmented. We support IPv6 routing, for example, but we don't support DHCPv6, and we don't support IPv6 in our distributed router. So there's a matrix of support, and obviously that's going to be important, right? In the current implementation of the plugin, we decided to call it what it is, and we said no IPv6 support in the 1.0. Yes, question. Hi, I have a quick question. Thanks, Marcos, this was really interesting. This is a question about your slide 9 or 10, where you have the different blocks of OpenStack talking to your vSphere block through Neutron. Anybody who is planning to take OpenStack to production will, sooner or later, come to terms with the fact that you want line-rate speed and probably want to leverage your existing networking gear. So if I want to do 40 gig, why doesn't the VMware ecosystem allow the vCenter block to connect through the Neutron plugin and use, say, a Cisco Nexus 7K underneath? Yeah. I think I understand; the question is why the plugin talks to NSX Manager and not just to my physical switch. Yes, that's one part, but what if you didn't mandate me to use NSX, but still supported having vCenter talk to your OpenStack cloud through your Neutron server plugin, and then from there go to the physical switches? I think that's a very interesting question, but the answer is: we did try that, by the way. We tried that. That's what Martin initially wanted: install OpenFlow in all the switches and let any control plane manage all the switches, and all these vendors said no. Getting there is difficult, so what you're going to see is monolithic solutions like ours that are just trying to optimize the interactions with our own products. Having said that, there's a program in NSX to support management of top-of-rack topologies, and we're going to announce support with Arista, HP, Dell, Cumulus, Brocade very, very soon. So that could be a way to get there. Is it on your roadmap? Yeah, it's actually committed, and it's coming in a matter of weeks. Okay, thank you so much again for your time. I'm being asked to end it. Thank you.