 Good afternoon everyone. I hope you all are having a good day and have had a good week here at the OpenStack Summit this week. Thank you all for joining us this afternoon. I don't know, is this the last session of the summit or is there one more after this one? Okay, so there's me and one other person in between you and freedom. So I will do my best to make sure that we give you useful information and that way you're not inclined to go ahead and leave. How many people are leaving to go home internationally perhaps tonight? Anybody flying home? A few here and there? Okay. Well we wish you safe travels and thank you for joining us. My name is Scott Lowe. I work for VMware in the Network and Security Business Unit. And that is the team that is responsible for creating VMware NSX. And we're going to take a few minutes this afternoon and talk about deploying NSX with OpenStack. Before we get started, let's, there we go. Make sure my thing works. Before we get started, just a couple of things. Feel free to ask questions throughout the session. You're more than welcome if I say something and you don't understand and you'd like some additional information or some additional clarification. Please feel free to ask a question. The session is being recorded. So ideally we'd love for you to get up and go to the mic and ask your question. But if you can get my attention past these very bright lights, then I'll repeat the question so that the recording gets it and everybody else in the room can hear it as well. But I may need you to speak up if you don't go to the microphone. You're more than welcome to take pictures or post tweets or updates or whatever. We do ask that you try to minimize whatever noise your mobile device makes. So turn the volume down on the little picture taking sound so that other people don't get interrupted during the session and turn off ringtones and that sort of thing. Just out of courtesy to everyone who's here. 
Okay, so a quick look at who I am and why I'm up here, and then we'll take a look at what we're going to talk about. I am a longtime veteran of the IT industry; I've been working in the IT field for over 20 years. I am a blogger. I write a lot on my own website, blog.scottlowe.org, and you've probably visited there at some point. I've been writing for about 10 years on that site. I have had the opportunity to publish a number of books, so you may be familiar with Mastering VMware vSphere; that was a book that I wrote last year. I was very honored to be able to participate in a book sprint where, along with a number of other individuals, we produced for the OpenStack community the OpenStack Architecture Design Guide, which is available in PDF form from the OpenStack.org website, or you can read it online in HTML form on the OpenStack website as well. My current focus at VMware is on open source projects. I do a lot of work with the Open vSwitch project, Open Virtual Network, Docker, and OpenStack, naturally. So: open source, cloud computing, networking, and virtualization. And just to show that I am truly a geek, I do run OpenStack at home. Anybody else out there running OpenStack at home? A few, okay. Yeah, so you know how it is. My kids are asking me to spin up Minecraft servers for them, so I tell them to go log into the OpenStack cloud and spin up their Minecraft server. All right, our agenda. For those of you that aren't familiar with NSX, I'll first provide a quick overview of VMware NSX. Very high level, but if you are interested in more detail, you can see me after the session and I'll be happy to talk with you, answer any questions you may have, and go over any of the elements in more detail; I don't want to bore everyone. We'll start with a high-level view. Then I want to look at some of the means, or the channels, that you can use to deploy NSX in your OpenStack environment. 
I'll provide some examples of customer deployments using these various channels. I was asked to remove customer-specific information, so I have to be fairly generic about the customers, but I'll give you what information I can share. And then, perhaps the most exciting part, at least for me and hopefully for you as well: I'm going to talk about some of the things that we're going to be adding in a future release. I've been told that this is one of the first times we'll be talking about these features publicly, so you are among the very first to hear about some of the features that we have planned. I have a section for Q&A here at the end, but as I mentioned earlier, you are more than welcome to ask questions at any time; don't worry about interrupting me. Just stick your hand up or get my attention in some fashion and ask your question. So let's start with a quick overview of VMware NSX. By a raise of hands, how many people in here are familiar with the NSX architecture? I'm going to block the light so I can see. Okay, a fair number of you. So we still have a pretty fair number of folks who are not familiar with it, and therefore I think this will be helpful. So what is VMware NSX? 
It is a network virtualization solution. We leverage a network overlay: we are focused on creating a solution whereby you can create logical networks that are decoupled from the underlying physical network fabric. This overlay mechanism, and the software and orchestration that is wrapped around it as part of NSX, allow you to create and put into effect distributed L2 and distributed L3 across all the hypervisors in the system. So you get distributed layer 2 switching and distributed layer 3 routing, and that functionality is distributed across all the hypervisors that are involved in OpenStack. If you have 500 nodes in your OpenStack deployment, then every single one of those 500 nodes will be performing local L2 switching and local L3 routing, and it will all be centrally controlled and centrally orchestrated by NSX. The part that does that orchestration and that control is a scale-out control plane: we have a scale-out controller cluster that performs the functions of understanding where your instances in the OpenStack cloud are running, on what hypervisor they were turned up, and how to provide connectivity to those instances. It pushes that information down, using control plane protocols, to the hypervisors, and the hypervisors are then responsible for actually doing the work. 
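To make that controller/hypervisor split concrete, here is a minimal Python sketch (my own illustration of the pattern, not NSX code; all class and method names are hypothetical) of a central controller that proactively computes per-hypervisor forwarding entries and pushes them down, so each hypervisor forwards purely locally:

```python
# Toy sketch: a central "controller" that knows which hypervisor hosts each
# instance and pushes the full forwarding table to every hypervisor up front.

class Controller:
    def __init__(self):
        self.locations = {}  # instance MAC -> hypervisor name

    def register(self, mac, hypervisor):
        self.locations[mac] = hypervisor

    def program(self, hypervisors):
        # Proactively push state ahead of time, so no hypervisor ever has
        # to ask ("punt to") the controller in the data path later.
        for hv in hypervisors:
            hv.install_flows(dict(self.locations))

class Hypervisor:
    def __init__(self, name):
        self.name = name
        self.flows = {}

    def install_flows(self, flows):
        self.flows = flows

    def forward(self, dst_mac):
        # Purely local lookup in the data plane; unknown destinations are
        # dropped locally rather than forwarded to the control plane.
        return self.flows.get(dst_mac, "drop")

ctrl = Controller()
hv1, hv2 = Hypervisor("hv1"), Hypervisor("hv2")
ctrl.register("aa:bb:cc:00:00:01", "hv1")
ctrl.register("aa:bb:cc:00:00:02", "hv2")
ctrl.program([hv1, hv2])

print(hv1.forward("aa:bb:cc:00:00:02"))  # "hv2": tunnel toward that host
print(hv1.forward("ff:ff:ff:00:00:99"))  # "drop": handled locally, no punt
```

The key design point the sketch shows is that the forwarding decision never leaves the hypervisor: the control plane only pre-programs state, it never sits in the packet path.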
It's very important to note, although I don't have it noted here on the screen, that the control plane is completely out of the data plane; there is total isolation and separation between the control plane of NSX and the data plane. Hopefully that makes sense, but as an example: in some cases, when you have a central controller or a central controller cluster, the hypervisors end up having to forward traffic to the controller to understand what needs to be done with it, and so some architectures incur additional latency. When they receive a packet they don't know what to do with, they end up sending it to the controller, and the controller has to process it and then send it back. NSX has complete isolation between the data plane and the control plane, so the hypervisors will never forward traffic to the controllers. We use a proactive method of programming the hypervisors: by the time a packet reaches the hypervisor from a VM, the rules have already been inserted into the hypervisor to tell it how to direct that traffic. So we don't ever punt traffic to the controllers, which gives us improved latency versus solutions that may need to punt to the controller. NSX is supported in multi-hypervisor environments, and I'll show you an architecture in a minute that shows it running on Linux hypervisors. When you are running NSX in a pure vSphere environment, perhaps using VIO (there have been a couple of sessions on VIO prior to mine this afternoon, so I won't spend a lot of time on it; I'll mention it briefly, but the previous speakers have already covered the detail for me), that would be a completely vSphere environment, and there are additional network services we can offer in that pure vSphere environment with NSX that we today can't offer outside of it. One of them is, for example, a fully 
distributed and fully stateful firewall, where the enforcement of the firewall rules occurs at the vNIC layer on every hypervisor. Just like we distribute all the L2 switching and all the L3 routing across all the hypervisors in the system, when you're running a pure vSphere environment underneath OpenStack we can offer fully distributed firewall functionality that is also stateful: L2, L3, L4 firewalling. Also logical load balancing, and when used in conjunction with VIO you can consume that through the Neutron LBaaS project. And also logical VPN; it's not yet integrated into the VPN-as-a-service efforts from Neutron, but that is something we are actively exploring. So here's sort of a graphical view of what this looks like. This is a really high-level view, and I wouldn't take it as a technical deep-dive architecture; this is more of a logical representation of the pieces that come into play. But you can see (and I wonder, does this thing have a laser pointer?) some of the key elements of the architecture that I reviewed already. For example, when we're operating in an OpenStack environment, we operate as a core plug-in to Neutron, and that core plug-in, by the way, is fully upstream. There's nothing proprietary about that plug-in: it is developed fully upstream with the rest of the Neutron plug-ins, available fully upstream, and it runs through all of the existing OpenStack gating and infrastructure. So when we're running NSX in conjunction with OpenStack, the interface would be the Neutron plug-in. Where you see I have mentioned API access from the cloud management platform, that's where the Neutron plug-in is going to be communicating down, taking the requests that are coming into Neutron, which you're submitting via the Neutron command line client or some other API client or Horizon, whatever mechanism that might be. It's taking those requests and feeding them down to the controller cluster. The controller cluster, and also 
some management plane elements that aren't represented here but are also involved, will then take that information and pass it down to the hypervisors. In this particular case I'm showing open source hypervisors running Open vSwitch, which is, as you're probably all familiar, an open source project, Apache licensed, that supports programmable networking on Linux hypervisors. So we will interface with OVS and pass down the information, and OVS will then use the encapsulation protocol, the overlay protocol. There are a number of different protocols that are supported: STT, VXLAN, GRE, and also, not listed here, a new one called Geneve, which we are developing in conjunction with Red Hat and Microsoft and Intel. Those are the encapsulation protocols used in the data plane; that's what allows us to decouple the logical networks from the underlying physical fabric. So when you build your physical fabric, you'll build it for scale: you'll do full L3, possibly a spine-leaf type of arrangement with your L2-L3 boundary at the top of rack and L2 only in the rack, all the sorts of things that network engineers really love to be able to design but often are prevented from doing because they have to maintain some sort of weird adjacency or connectivity. The logical adjacencies, the logical networks, are then built on top of that scalable L3 fabric. We also offer these logical-to-physical gateways. Where your logical networks, which are encapsulated with one of these encapsulation protocols, need to talk to the outside world (because we all have workloads that need to communicate with other workloads outside of our OpenStack cloud; I don't think any of us is fortunate enough to have everything running inside OpenStack just yet), we have these gateways, and I'll come back and talk about them more in a moment. Their responsibility is providing both layer 3 routed access in and out of the logical networks as well as 
layer 2 bridged access in and out of the logical networks. As for the APIs for the layer 2 gateways: if you've been sitting in any of the Neutron-related sessions this week, you know that the Neutron project hasn't finished formalizing the APIs for the layer 2 stuff, so that is implemented as an API extension supported with NSX. We offer that functionality even though the Neutron project hasn't formalized how those layer 2 gateways are actually implemented yet. This is really cool, because it allows you to bridge a Neutron network with a VLAN outside on your physical network: they would be the same broadcast domain and they would share the same IP address space. It gives you a way to do IP mobility, so that you could migrate a workload, or something of that nature, into your OpenStack cloud and maintain its IP address and its connectivity. But we also do layer 3 routed access, which is far more common; using Neutron logical routers and NAT, or floating IPs, we are able to provide access that way as well. All right, so that's a quick overview of the architecture. In terms of adoption and where we are right now: Open vSwitch, as I mentioned, is a key part of the NSX architecture. This is an open source project that Nicira launched back in 2009, and it has since seen a lot of growth. It is a project that VMware still heavily invests in, and we recognize that it is the foundation for a lot of the work that's going on in the network virtualization space. Pretty much all of the network virtualization solutions out there on Linux today will, in some form or fashion, leverage OVS. They may not leverage the OVS datapath; they may leverage only the OVS user space, or they may be leveraging both the kernel datapath as well as the user space. The kernel datapath is upstream as part of the Linux kernel, and has been since version 3.3 of the Linux kernel, and we continue to work 
closely with the Linux kernel community to push features into the Linux kernel to support the broader Linux networking community as a whole. So Open vSwitch is important, and we continue to invest there. At last count, or at least when this slide was produced, we had 60 different organizations contributing code to Open vSwitch; that included contributors from Red Hat, Cisco, VMware, Citrix, and others. As for the core committers to Open vSwitch: I think it was last year that a gentleman from Red Hat, Thomas Graf, became a core committer, and then just this week they added Russell Bryant, who also works at Red Hat. Thomas may actually be at Cisco now, if I'm not mistaken; I forget. Either way, we've got some core committers who don't work for VMware, which again reinforces the fact that we are doing this in conjunction with the community; it's not something that we're off doing on our own. Right now, 20% of our production deployments of NSX run under OpenStack. These are some of the customers who have publicly allowed us to refer to them as NSX customers, and some of these customers are using NSX in conjunction with OpenStack. This bottom number just shows that we have some very, very sizable deployments of NSX and OpenStack: in one particular case, we have over 100,000 KVM VMs running in a single NSX deployment. So it is scalable, it is seeing traction in the OpenStack community, and it is being developed in the open in conjunction with the community. So let's talk about how we can deploy NSX with OpenStack. What are the ways that you, as an OpenStack administrator or operator, or someone who is implementing OpenStack in your environment, can go about doing that? What are the channels? VMware supports a lot of different ways of consuming NSX; just as you have a 
lot of choice in how you consume OpenStack, there is also a lot of choice in how you can choose to deploy NSX with OpenStack. As a matter of example, there is this spectrum of ways to deploy OpenStack in your environment. You may choose to go the pre-packaged, pre-configured route; VIO is a good example of that. You lose some flexibility, because some choices are already made for you, but in return you get ease of deployment, ease of maintenance, and ease of upgrades along that route. And VIO is not the only one; there are other companies out there that offer this sort of rolled-up, pre-done-for-you OpenStack piece. You can also use the distribution packages that are available: maybe you are going to go with a partner, Mirantis for example, or you are going to choose one of the packages offered through a Linux distribution like Red Hat, SUSE, or Canonical, something of that nature. This may give you a little more flexibility in terms of how you deploy it, what you deploy, and some of the configuration settings, but you still have a vendor that you can go back to. For example, if you say, hey, I want to deploy OpenStack and I want to use Canonical to help me with that, then you can purchase support options and know that you have somebody you can go back to if there is an issue. And then finally there is the do-it-yourself route: I want to set up my own CI/CD infrastructure, I want to run off of trunk or close to trunk, and I am going to do it that way. I am not going to use distribution packages, I am not going to go through a partner, I am going to do it myself. Along the same lines, we have different ways that you can consume NSX. You can choose to consume NSX through something like VIO; I will just mention where it fits in here, since you have already had a couple of different sessions here talking about VIO in great detail. You can also say, hey, I want to go back and look at some of the 
partnerships that VMware has announced and formed with others within the OpenStack community, and I want to deploy NSX in conjunction with one of those partners. Or finally, you can take the do-it-yourself route. All three of these routes are perfectly fine. Let's look at these in a little more detail. First, what about using VIO? This, as I mentioned earlier, falls in the pre-packaged, pre-configured category. There is code already present to help with the integration of NSX; I don't know if the VIO guys showed you that in one of the previous two sessions, but when you go to deploy VIO, there is a section that allows you to say, here is the IP address of the manager for NSX, and off you go. It does everything for you: it walks through its setup scripts, it deploys all the pieces that need to be there, it configures all the pieces that need to be there, they all talk to each other, and off you go. That makes VIO a really great choice if you are an existing vSphere customer. If you have an existing vSphere infrastructure and you are interested in deploying OpenStack, I would encourage you to really look at it as a great option. Because this is a pre-packaged offering and we are doing all the integration work, VMware can provide end-to-end support for the entire piece: the hypervisor, the storage platforms, the networking, NSX, OpenStack, all the way through. You get some nice features that I am sure the VIO folks have talked about: really easy deployments and really easy upgrades, including a rollback feature in VIO 2.0, which allows you to deploy the new version and, if you don't like it, roll back to the previous version, which is really, really nice. And because we are running in a pure vSphere environment, as I mentioned earlier, there are some things that we can do with NSX when it's only vSphere that we aren't yet able to do in a multi-hypervisor environment. So this allows you to take advantage 
of, for example, the fully stateful firewalling that's available with vSphere, or logical load balancing that's integrated into NSX. So that's one route to go down. If you are currently thinking of deploying OpenStack, I firmly believe that you need a strong network virtualization solution to go with OpenStack, and obviously we'd love to have you choose NSX. If you are an existing vSphere customer, I think going down the VIO route is a really great way to get started very, very rapidly. But there are other options as well, and we fully support those. Over the last couple of years, as we have involved ourselves in much greater depth with the OpenStack community, we have announced relationships with a lot of different partners in the OpenStack space. Partners like Canonical, where they will package up and include support for vSphere and NSX in their packages, and their support is coordinated with VMware; so if you buy a support package from Canonical, they coordinate with VMware support to help you resolve issues. That's one route, for example. We've talked about partnerships with Mirantis and their distribution, and how, again, their distribution will include support for vSphere and NSX and back-end support coordination. HP is another example. We are in discussions with a number of other partners right now, and all of those are customer driven: as customers come to us and say, hey, we'd like to work with you in conjunction with, let's say, a particular company, or SUSE, then we want to work with those partners and continue to expand our relationships. So that's another avenue for you to deploy OpenStack and NSX. Some of these partners may provide some advanced technical integrations; for others it may just be that they package it up and offer you support, and then, as I mentioned, there's coordination on the back end. So if you have a support package with one of these partners, say a Mirantis or 
Canonical for example, you'll call them for support, and they will coordinate on the back end to help with the resolution of any NSX-specific issues. Finally, there is the do-it-yourself route. We have a number of very, very large customers who choose to go this route, and that's fine if you have the existing expertise and/or the staff to support this sort of method. Most of these are non-vSphere customers: customers who are running very, very large Xen or KVM installations. These customers have the internal expertise, and they build OpenStack themselves. They might be deploying packages from a distribution, but more often than not these folks are setting up their own CI/CD. They may actually be active OpenStack contributors, and therefore they have their own CI/CD infrastructure; they are deploying patches into their own production environment as well as submitting them upstream. So they are very, very deeply involved in Linux and in OpenStack, very deeply involved in how this works. They do it that way, and we support that method as well. In that particular case, we don't support the full OpenStack piece, because they're building it and they're supporting it, but we will provide support for NSX and the integration of NSX with OpenStack. We may be submitting patches on the customer's behalf up into Neutron, for example, where we uncover problems in Neutron as a result of the implementation. So that's another route. The whole range of options is open to you as you are looking at deploying OpenStack. If you want to go the route of a pre-packaged appliance, a pre-configured piece like VIO, that's fine; we can do that. If you want to partner with somebody, either from a professional services or intellectual property or packaging route (Mirantis, Canonical, HP, Platform9, something like that), then we work with those partners as well. And if you have a sizable enough implementation and/or staff and you want to go the full do-it-yourself route, that's fine; we support that as 
well. So we want to support customer choice in how you consume OpenStack and how you integrate NSX with OpenStack. Let me talk just briefly about a few examples of some customer deployments; again, I was asked not to name these customers by name. We have a very, very large retailer that is going down the pre-packaged route: they've chosen to use VIO, and underneath VIO they're running vSphere, and then they're running NSX. They're using this VIO/vSphere/NSX environment to run their web presence in production. So this is a very well-known brand name that is running their entire web presence on VIO and vSphere and NSX. They're currently running around 5,000 VMs on this installation, all managed by OpenStack via VIO, all running on vSphere, with the networking and security all provided by NSX. They anticipate growing; obviously they're seeing a lot of success in their business, so we'll probably see this grow by about 20% over the next year or so; that's what they're anticipating. We have a couple of different financial institutions that are working with us, at different stages of deployment. One of them is very, very early in a POC but is already committed to a particular partner; we're working closely with Mirantis on this opportunity, so they're using Mirantis OpenStack and will be leveraging NSX with that. They're a little earlier in their deployment. We have some other, larger financial institutions that are farther along, but also partnering: a couple of different opportunities there with HP, with their Helion group, and some others. And then, as an example of the do-it-yourself route, we have a couple of different options there, but we have one service provider with a full DIY installation. They are in full production; this is NSX with XenServer running underneath OpenStack, so it's a very, very large non-vSphere environment, and they're leveraging a whole set of open source tools in conjunction with their Linux hypervisors 
and NSX to provide services to their customers. So those are some examples across a variety of industry verticals, and examples of the different ways that customers are choosing to consume OpenStack and NSX in conjunction with each other. All right, so before I move forward with a look at some of the future features we're going to be adding, are there any questions? Yes? I'm sorry, say that again: a multi-hypervisor environment? Right, so that's fine; don't worry about that. So the question was around multi-hypervisor support. Was there a specific question? I've been talking at both points about how we fully support KVM and Xen, and I gave you examples of customers who have deployed that; we also have customers who are running only vSphere. I can't immediately name customers who are running a mixed hypervisor environment. We run an internal cloud that runs both vSphere and KVM; last time I checked, we were somewhere around 300 hypervisors, both KVM and vSphere, running under NSX and managed by OpenStack. But because of the operational challenges of running a mixed hypervisor environment, customers will often choose not to go down that route: maintaining multiple images for the hypervisors, maintaining metadata in Glance to know which image goes to which hypervisor, and so on, creates a lot of operational challenges. But the idea of running a mixed hypervisor environment is something that you can certainly do with NSX. Does that help? 
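As an aside on that Glance metadata point: in a mixed hypervisor cloud, operators typically tag each image with a hypervisor-type property so that instances only land on compatible hosts. Here is a small Python sketch of that matching logic (my own illustration of the operational pattern; the dictionaries and the function name are hypothetical, not actual Glance or Nova code):

```python
# Toy illustration of hypervisor-aware image/host matching in a mixed
# KVM + vSphere cloud. Names are hypothetical, not real Nova/Glance APIs.

images = {
    "ubuntu-14.04-kvm": {"hypervisor_type": "qemu"},    # qcow2 image for KVM
    "ubuntu-14.04-esx": {"hypervisor_type": "vmware"},  # VMDK image for ESXi
}

hosts = {
    "compute-01": "qemu",    # KVM host
    "compute-02": "vmware",  # ESXi host
}

def compatible_hosts(image_name):
    """Return the hosts whose hypervisor matches the image's metadata,
    mimicking what a scheduler filter does with image properties."""
    wanted = images[image_name]["hypervisor_type"]
    return sorted(h for h, hv in hosts.items() if hv == wanted)

print(compatible_hosts("ubuntu-14.04-kvm"))  # ['compute-01']
print(compatible_hosts("ubuntu-14.04-esx"))  # ['compute-02']
```

The operational burden the speaker mentions comes from keeping these parallel image sets and their metadata in sync for every guest OS you support.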
Great, other questions before we move on? Just going to shield my eyes so I can see if anybody's hand is up. Okay, great. All right, again, if you do have any questions, or if you want any additional information that I'm able to provide, I'll be happy to do that; just catch me afterwards. I also have my contact information on the last slide; you're more than welcome to email me or hit me on Twitter, whatever works for you. Okay, so let's talk about some of the things that we have planned for a future release of NSX. Now, everything that I'm saying here is committed to a future release, but we haven't provided any dates on when this future release is going to hit, so you can't go back to your boss and say, but Scott promised that it would be here at this time, because I'm not promising you that, okay? We anticipate availability for this release early next year, but again, lots of things may change between now and then. These are the features that we are currently projecting to be able to deliver in that future release. I'm going to list them first, and then we'll talk in more detail about each of the features. First up: we're going to work on improving our multi-hypervisor support, and I'll talk about that in more detail in just a moment. I've talked about it some here already; it's not really a secret that there are some things we can do when we have a pure vSphere environment and NSX that we can't do when we start putting in other hypervisors. I talked about the distributed firewall, for example, and one of the things that we are really shooting for in this upcoming release is improving that multi-hypervisor support and bringing additional parity to what we're able to do in a non-vSphere environment. That's one, and I'll talk more about that in just a moment. We are also going to be addressing some issues in scaling our 
management plane. Our control plane is already a scale-out control plane; we use some advanced clustering mechanisms and a consensus algorithm. If anybody's interested, it's the Paxos consensus algorithm; you can go look that up on Wikipedia and blow your mind on how it works. That is our central control cluster, which is already scale-out, but we didn't have a scale-out management cluster, and so one of the things we're aiming to provide in this upcoming release is a scale-out management cluster. That will allow you to scale the control plane and the management plane independently of one another: if you are seeing lots of API requests, you can scale out the management plane without having to also scale out the control plane, or vice versa. So that's really cool. We're going to be improving our logical-to-physical connectivity, and I'll talk more about that in a moment; we're adding some additional options for improved performance on the logical-to-physical connectivity. And then we're also going to be adding dynamic routing support for Neutron logical routers. This will mean that you can create Neutron logical routers and have those logical routers exchange dynamic routing protocol information with your physical routers, so that you can have networks in OpenStack that are essentially routable networks and are visible in the routing tables, if that's what you wish to do. You can do this today with static routes: we already offer the ability today in NSX to say, here's a Neutron logical network, I want this logical network to be passed out to the other network, and I don't want to perform NAT. In other words, I want to take the address space that is assigned to this logical network and just represent it out on the rest of the network. I know I'm a geek, but I actually do this in one of the logical networks in my own OpenStack lab: I just have a subnet off 
of my overall routing table, and that is a neutron logical network. Then what I have to do is program a route to say that this subnet, this route, is available through this neutron logical router, right? What this will allow us to do is actually use BGP to establish peering relationships between neutron logical routers, or some entity acting on their behalf (it may not be the logical router itself but a BGP speaker speaking on behalf of the router, since routers may come and go), and then have those exchange routing information.

There is an effort in the neutron community around this. I don't know if any of you saw it; it was a session at the end of the day, not yesterday but the day before (today's Thursday, that was Tuesday), on some of the work that's going on in the neutron community around adding BGP routing support. That's an area where we'll be working with them to standardize that functionality as we bring it into NSX.

So let me dive into a couple of these in a little more detail. I'll start with the multi-hypervisor support. Again, as I mentioned, it's not really a big secret that we have these differences in functionality between what we can provide in a pure vSphere environment versus what we can provide in a non-vSphere environment. Part of that has been because, as the company that also provides the hypervisor, we can sometimes add functionality to the hypervisor more quickly than we can add features in open source by working with the community. I'm not knocking one or the other; they're just different development models, and each of them has its advantages and its disadvantages. But one of the focuses for this next release of NSX is taking some of these features that we were able to add in vSphere, where we've invested a lot of time and a lot of effort, and making them available in the open source community as well.

One of those examples is the distributed firewall. In the next release of NSX, we'll be able to give you a distributed firewall, a fully stateful L2-through-L4 firewall, on KVM, right? Not just on vSphere but also on KVM, and NSX will manage the rules across both hypervisors, so that when you do have a mixed hypervisor environment, you can define one set of security policies and it'll apply wherever the instance lands.

What we've done to accomplish this is work very, very closely with a number of different communities. One, we worked closely with the OVS community: we added features and functionality to Open vSwitch that allow Open vSwitch to integrate with something called the connection tracking module in the Linux kernel. Now, the connection tracking module in the Linux kernel is the same module that iptables uses. When you create iptables rules that say, hey, I want to allow this, and I want to allow related or established connections, I want to maintain state, that's actually being done by the connection tracking kernel module. So we worked with the OVS developers and committers to add extensions to OVS, and extensions to OpenFlow (extensions that we hope will eventually make their way into OpenFlow proper, but which today live in OVS's implementation of OpenFlow), that allow it to integrate with this kernel module. And then we also worked with the Linux kernel community to update the Open vSwitch kernel module, which is part of the Linux kernel, to support that work on the OVS side.

This has all been merged; it's already upstream in the Linux kernel, and it's going to be part of the 4.3 kernel when that is officially released. So in the 4.3 kernel there will be support in the OVS kernel module for connection-tracking-based rules in OVS. That means you can write rules for OVS that leverage the connection tracking state maintained by that kernel module, to allow inbound port 22 and related or established connections, for example, where we're actually maintaining and tracking the state of connections inbound and outbound through the firewall. This is the same thing, by the way, that iptables does, but what this allows us to do is bring it to OVS. And the reality is, because it's being done in OVS, anybody who leverages OVS gets this functionality. We believe it's necessary and an important part of what the community as a whole needs, but it will also find its way into NSX. So we're real excited about that; that's a big feature.

One of the other things, in terms of improving our multi-hypervisor support, is that there is an OVS port currently underway to Hyper-V. We're working closely with a company called Cloudbase Solutions; they're based in Italy, and they've been working very, very closely on bringing Windows into OpenStack, which is predominantly sort of a Linux story. They've built tools like cloudbase-init for Windows, which allows you to use cloud-init-style configuration and the instance metadata to customize a Windows instance, which is very cool. But the other thing they've been working on, as part of the OVS community, is actually porting Open vSwitch to run on Hyper-V. They've already released a beta of that. It comes with a full graphical installer, where you click next, next, next on a Windows machine, which installs OVS, and then you get a set of command-line tools to configure it. Once that is fully baked and fully ready, this will give us an avenue to bring Hyper-V support in for NSX. This isn't fully committed yet, because we're still working through the OVS port, but we are looking at that as something that we want to do. So, any questions about this before I move on?
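To make the connection tracking idea concrete, here is a sketch of what such rules can look like with `ovs-ofctl`, using the `ct()` action and `ct_state` matches that this work added. The bridge name `br0` and the single-table layout are hypothetical and purely illustrative (this is not NSX's actual flow programming), and it assumes OVS with conntrack support on a 4.3+ kernel:

```shell
# Send untracked TCP packets through the kernel connection tracker,
# then recirculate them back into table 0 with ct_state populated.
ovs-ofctl add-flow br0 "table=0, priority=50, ct_state=-trk, tcp, actions=ct(table=0)"

# Allow (and commit to the tracker) new inbound connections to TCP port 22.
ovs-ofctl add-flow br0 "table=0, priority=50, ct_state=+trk+new, tcp, tp_dst=22, actions=ct(commit),normal"

# Allow packets belonging to already-established connections, either direction.
ovs-ofctl add-flow br0 "table=0, priority=50, ct_state=+trk+est, tcp, actions=normal"

# Any other TCP traffic that reaches this point is dropped.
ovs-ofctl add-flow br0 "table=0, priority=10, tcp, actions=drop"
```

The shape is the same as an iptables "allow port 22 plus ESTABLISHED,RELATED" policy, just expressed as OpenFlow rules, which is exactly the point of the kernel integration described above.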
I have a microphone, it's okay, I'll repeat the question. Okay, great.

So the other thing that I wanted to talk about here is high-performance logical-to-physical connectivity. I mentioned earlier that NSX offers these gateways that provide logical-to-physical connectivity. They sit at the edge of your logical network, or the edge of the physical network: the edge between where that cloud sits and where the rest of your stuff sits, whether it be virtualized on traditional hypervisors, bare metal, whatever. They sit at that edge, and they're responsible for taking the overlay network that NSX builds using an encapsulation protocol, unwrapping that and putting the traffic onto the physical network, and then taking traffic from the physical network, determining where that traffic needs to go and which tenant it belongs to, and passing it on into the cloud.

In our current solution we offered a couple of different options. When you're running in a non-vSphere environment, you could run these things bare metal: we would give you code that you would install directly onto a server, and it would run right on the metal, no hypervisor required. That would create a gateway out of that server, and it would start acting as your gateway device. Then we had some logical abstractions on top of that, where you would host multiple logical routers on it, and we had failure zones and all kinds of stuff to make sure that this is all redundant and scale-out, right?
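Tying the gateway discussion back to the no-NAT static-route setup I described a few minutes ago, here's roughly what that looks like with the neutron CLI of this era. The names (`r1`, `ext-net`, `tenant-subnet`) and the addresses are hypothetical, `--disable-snat` is an admin-only option, and the last line runs on the upstream physical router, not on a cloud node:

```shell
# Create a neutron logical router and attach it to the external
# network with SNAT disabled, so the tenant subnet is exposed as-is.
neutron router-create r1
neutron router-gateway-set --disable-snat r1 ext-net
neutron router-interface-add r1 tenant-subnet

# On the upstream physical router, point the tenant prefix at the
# neutron router's external address so the subnet becomes reachable.
ip route add 10.20.0.0/24 via 192.0.2.10
```

The dynamic routing work described earlier would replace that last manual `ip route` step with a BGP advertisement.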
If you're running in a pure vSphere environment, we only gave you the option to run it as a VM; NSX spins up that VM, manages it, and deploys it for you, and that's it. What we're really excited about being able to do in the next release is give you both options on either side. In other words, whether you're running in a non-vSphere environment or a vSphere environment, you'll have your choice of deploying these gateways as physical workloads, as virtual machines, or as a mix. You'll be able to say, hey, look, I might have some customers that really, really need extremely high performance; I want to stick 40-gig NICs in these guys, for example. Maybe that's a use case where you need a bare-metal gateway. You may have other use cases or consumers where that throughput isn't quite as necessary, and say, okay, 10-gig connectivity is fine, or multi-gig connectivity is fine; let's deploy this as a VM or as a cluster of VMs. That's fine too. So both form factors are going to be supported.

Regardless of which form factor you choose, what we'll be incorporating is support for something called the Intel Data Plane Development Kit, or DPDK. This is a packet-processing framework developed by Intel that lets you do a lot of the packet processing in highly optimized code on the underlying CPU, and this gives us very, very high performance even at very small packet sizes. Small packet size is kind of the Achilles' heel of software switching, right? It's one thing to say I've got a 1500-byte packet, but it's another thing entirely when you're looking at 64-byte packets. Now, in an OpenStack environment you're primarily looking at 1500-byte packets, maybe 9000 because you went jumbo frames, so DPDK will offer some good performance benefits there; but it's when you take that sort of thing and use it in more of an NFV-oriented environment, where those smaller packet sizes are far more prevalent, that you'll really begin to see some huge performance improvements.

The other thing that you'll be able to do is use ECMP, and what this means is you'll be able to take a cluster of these gateways and create active-active-active uplinks out to your physical network. Then, and this is the caveat with ECMP (it is not the freebie that a lot of people describe it to be), assuming that your traffic distribution allows you to leverage all of those uplinks equally, you'd be able to pass multiple 10-gig interfaces' worth of traffic in and out of the network at line rate. We've talked about doing up to eight links, so 80 gigs of throughput in and out of these logical gateways, which is pretty significant. Now again, I made the caveat that it depends on your traffic flows. One-to-one traffic flows, like one instance in your cloud talking to one server outside your cloud: ECMP is not going to do anything for that, and any vendor that tells you ECMP is going to fix that is off their rocker, because ECMP only helps when you have multiple traffic flows. That's just how it is. Nevertheless, it does provide a great way of scaling up the traffic when you have multiple instances on your logical network exchanging traffic with multiple endpoints outside the logical network. So any sort of many-to-one, one-to-many, or many-to-many traffic flows will all benefit from ECMP, and all traffic flows, one-to-one as well as many-to-many or many-to-one, will benefit from the enhanced packet processing that DPDK provides. That's some of the stuff we're talking about, which I think is pretty cool.

So that's really all that I have. I'll be happy to address any questions that you may have at this point; feel free to stick your hand up or catch me afterwards. And since you guys know that we're taking questions, I don't need to leave that slide up; here's my contact information, and you're more than welcome to contact me if you would like. I travel a lot: I'm based in the States, I'm here in Japan this week, in New York City next week, and in California the week after that, so it may take me a few days to respond to you, but I will get back to you. If you have any questions, feel free to let me know, but otherwise I will go ahead and let you guys get out of here and move on to your next session. I'll stick around for a few minutes off to the side here if you have any other questions; I'd be happy to address those. Thank you very much for your time.
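One footnote on the ECMP point above: the reason one-to-one flows don't benefit is that ECMP hashes each flow's identifying fields (the 5-tuple) to pick exactly one uplink, so a single flow always lands on the same link while many distinct flows spread out. A toy sketch of that selection behavior, where `cksum` stands in (purely illustratively) for the hash a real router or NIC would use:

```shell
#!/bin/sh
# Toy ECMP link selection: hash a flow's 5-tuple onto one of 4 uplinks.
UPLINKS=4

pick_link() {
    # cksum gives a deterministic 32-bit checksum of the 5-tuple string;
    # modulo the uplink count selects the link carrying that flow.
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $(( h % UPLINKS ))
}

# The same flow always maps to the same single link...
pick_link "tcp 10.0.0.5:49152 -> 203.0.113.9:443"
pick_link "tcp 10.0.0.5:49152 -> 203.0.113.9:443"

# ...but many distinct flows (here, varying source ports) spread across links.
for port in 49152 49153 49154 49155 49156 49157; do
    pick_link "tcp 10.0.0.5:$port -> 203.0.113.9:443"
done
```

Because the mapping is per-flow, one elephant flow can never use more than one uplink's worth of bandwidth, which is exactly the caveat made above.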