Hello, my name is Eric Lopez. I'm a solution architect at VMware, and my co-presenter is Justin Pettit. He's a developer on the Open vSwitch team at VMware. The person who was supposed to give this talk, Somik, isn't here today, so I'll be taking over the presentation. Forty minutes? OK. We thought this was going to be a very quick session.

So what we're going to do is a deep dive into the OpenStack and VMware NSX architecture: how we provide network virtualization for OpenStack and why it's actually better to use VMware NSX in your OpenStack environment. We have five key points. We'll talk about OpenStack and VMware, then do an introduction to OpenStack Neutron — hopefully some of you are familiar with that — and cover some of its use cases. Then we'll go deeper into OpenStack Neutron and how NSX works with it, which will give you an idea of why NSX is important for enterprise-type applications. Then we'll cover what's next, which leads into what Justin is going to talk about: Hyper-V, Docker containers, and the next generation of NSX. We'll finish with a short demo showing how Hyper-V interacts with NSX today. If we have some time, I can also do a live demo showing full multi-hypervisor integration: ESX, Docker containers, Hyper-V, KVM. We can do Xen as well, though I don't have that in this particular demo. If you were ever at VMworld, we actually did that exact demo during our presentation in San Francisco.

So the key thing about OpenStack and the VMware SDDC — the Software-Defined Data Center — is that it gives us a framework to assemble an AWS-style infrastructure so you can offer infrastructure as a service to your clients or tenants. This is for developers. You have all the essential OpenStack components available: Horizon, the CLI tools for all the different projects, Nova, Neutron, Cinder, Glance. But the underlying infrastructure can be whatever you choose. A typical deployment is KVM with a Linux bridge architecture and NFS providing the Glance and Cinder backends. With NSX, we take over the Linux bridge role and provide all the plumbing for the network environment. For compute, we provide the ESXi hypervisor along with vCenter to address the scalability issue, and you can use any vCenter-compatible storage — a third-party array that plugs into the datastore environment, or our VSAN product. On top of that you do your application provisioning and management, developer tools, and service APIs, so your application developers and DevOps teams have all the open APIs that are currently available. This infrastructure is currently in beta and is called VMware Integrated OpenStack, or VIO. If you want information about it, talk to our product managers — come by our booth and ask how to be part of the VIO beta. So that's what OpenStack is for us and how we integrate with it. Now let's do a quick overview of OpenStack Neutron.
If you're looking at large-scale environments — hundreds or thousands of tenants, thousands of networks and switches — there's a scalability issue with Nova networking. Neutron improves on that. For a large number of tenants you want L3 routers, security (so security groups and things like that), Load Balancing as a Service, and VPN as a Service, and Neutron provides that basic functionality and plumbing for you. You can have multiple tenants using multiple logical routers and multiple logical switches, plus load balancing and VPN services.

We also support overlays. Instead of doing VLAN-based isolation, you now have the ability to run L2 over L3 networks — overlay networks using GRE, VXLAN, and the other tunneling protocols that are possible. That removes one of the major limitations of VLANs, the 4,096 VLAN IDs you can use, and gives you easy IP connectivity over the overlay network. From one hypervisor to another, traffic uses the overlay network to reach a VM on a different hypervisor, and it looks L2-adjacent on that system. And then you can use any open plug-in: our VMware NSX plug-in, the Linux bridge plug-in, the OVS plug-in, the Cisco plug-in, the NEC plug-in, or any of the third-party providers supplying the underlying tunnel network. So that's the use case for Neutron: it gives you the scalability you want and improves on the current limitations of Nova network.

Now, using OpenStack Neutron with NSX, we can look at what this kind of environment gives you and how it breaks some of the limitations Neutron currently has with the base OVS reference implementation. NSX is a highly distributed application. We have a control cluster that controls all the flows in the environment: it learns about the different endpoints and how those systems are supposed to communicate, and it programs the edge devices — your hypervisors — so they know how to switch packets around. This cluster is designed to scale and is fully active-active; as your workload increases, you just add more controller nodes. We also have a management console, NSX Manager. It's a separate console, used mostly for troubleshooting and the operational aspects of the environment. OpenStack still leverages the APIs and can communicate directly with the controllers. The controllers expose our own API for any type of call, so it's very easy to integrate into your own home-grown developer platforms or dashboards. If you have your own workflows, you can call our APIs directly — you don't have to use NSX Manager and you don't have to use the OpenStack APIs. That gives you flexibility.
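To make the Neutron side of this concrete, here's roughly what the tenant workflow described above — create a logical network, a subnet, and a router — looks like through the Neutron API. It's a minimal sketch using the python-neutronclient library; the credentials, names, and CIDR are placeholders, and with NSX the same calls are simply served by the NSX plug-in instead of the reference OVS plug-in.

```python
# Minimal sketch of a tenant creating a logical network, subnet, and router
# through the Neutron API. Credentials, names, and addresses are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# Logical switch (Neutron network) for the tenant.
net = neutron.create_network({'network': {'name': 'web-tier'}})
net_id = net['network']['id']

# Subnet attached to that network.
subnet = neutron.create_subnet({'subnet': {
    'network_id': net_id,
    'ip_version': 4,
    'cidr': '10.0.10.0/24',
}})

# Logical router, with an interface on the new subnet.
router = neutron.create_router({'router': {'name': 'tenant-router'}})
neutron.add_interface_router(router['router']['id'],
                             {'subnet_id': subnet['subnet']['id']})
```

Whether the backend is the OVS reference plug-in or NSX, the tenant-facing calls stay the same; that is the point of the plug-in model.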
The second thing is smart tunnels. We have the ability to use additional encapsulation methods. One is STT, which increases performance between hypervisors so you get near wire-speed between two of them, alongside VXLAN and whatever next-generation encapsulation technology comes along. We also address one of the issues VXLAN has with broadcast and multicast traffic: we have service nodes that replicate those packets for the environment, so the individual hypervisor nodes aren't dealing with broadcast and multicast replication themselves. That work gets offloaded to dedicated nodes that do the replication intelligently, and it's a fully scalable resource. And we have L2 and L3 gateway services in a highly available, active-active configuration: if you lose one of the edge gateways, all the state is synced between the systems and a new gateway takes over that process. So you have a highly scalable infrastructure providing the logical network for your environment.

Some of the improvements we bring to this environment: first, scale. We have very high scale limits. In each NSX domain — that is, each control cluster — you can have 15,000 logical routers, which is a lot of logical networks to offer, and 60,000 logical ports, or VMs, attached to those networks, all fully active-active. Second, high throughput. You can bond two 10-gigabit NICs together and get near wire speed — essentially 20 gigabits across the bonded pair. So if you need wire-speed data transfer inside a tenant network, we can provide that kind of scale. Third, optimized traffic for L3 and security. Security policies are enforced at the edge, where the packet is actually processed: when a packet comes in, the ingress or egress security policy is applied either at the source hypervisor or at the destination hypervisor. The same goes for distributed logical routing. Instead of going through a central node doing your L3 networking and having traffic tromboned through it, the hypervisors transmit packets directly to each other — we have fully distributed logical routing. So we optimize the traffic path for both security policy and routing between hypervisors, which serves a lot of the east-west traffic: as VMs talk to each other, whether on the same hypervisor or different ones, we optimize that traffic pattern. And lastly, management — the ability to actually manage the system. When you look at this scale, how do you troubleshoot and operate this kind of environment? We have our own management layer, which is what NSX Manager does. As we offload specific data path information, we're able to learn about the environment. The whole design is about management and high availability for the kind of enterprise applications you're looking at.
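Going back to the security point for a second: in OpenStack, those distributed security policies are expressed as Neutron security groups, and with the NSX plug-in they end up enforced at the source or destination hypervisor as described above. Here's a minimal sketch with python-neutronclient, reusing the `neutron` client from the earlier example; the group name, port, and prefix are placeholders.

```python
# Continuing with the `neutron` client from the earlier sketch. Security groups
# are the Neutron-level expression of the distributed security policies above;
# the group name, port, and prefix below are placeholders.
sg = neutron.create_security_group(
    {'security_group': {'name': 'web-sg',
                        'description': 'allow inbound HTTP'}})
sg_id = sg['security_group']['id']

# Allow inbound TCP/80 from anywhere to any port in this security group.
neutron.create_security_group_rule(
    {'security_group_rule': {'security_group_id': sg_id,
                             'direction': 'ingress',
                             'protocol': 'tcp',
                             'port_range_min': 80,
                             'port_range_max': 80,
                             'remote_ip_prefix': '0.0.0.0/0'}})
```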
So, management and monitoring tools. There are specific monitoring tools, like I mentioned before. Stats — you can get statistics on a logical port. Port connections — you can inspect logical port connections and use the connection tools to see how the environments are actually communicating in the underlay. We can inject packets into the environment and do a traceflow to see where a packet propagates, and you can use different packet signatures from the different NICs — packet size, packet length — which gives you that troubleshooting and operational view. We also have the ability to mirror logical ports, so you can mirror logical traffic to different points in the environment and use your existing monitoring and application tools to watch for particular data patterns, whatever you do in your typical environment: security policies for compliance monitoring, IDS, IPS — you can offload the logical traffic into your current tooling.

We also have the ability to upgrade the NSX components. Since NSX is a highly distributed application, we have a defined process for upgrading to the next version of the software: we upgrade the different components in the environment in an orderly, timely manner and help operate that aspect. We also integrate with bare metal. For networking services — L3 with static routing, L2 logical networks — we have hardware VTEP devices we can plug into as well. And if you need ACLs or QoS, we can provide those services too, such as bandwidth limiting on a particular VM, as well as DSCP handling: if you're using differentiated services (DSCP) markings, the markings on a VM's traffic can be replicated onto the underlying infrastructure's encapsulation, so the underlay understands what type of packet is being carried.

So, the next part. As Eric mentioned, my name is Justin Pettit. I'm one of the core OVS developers. The key to getting NSX to work on multiple hypervisors is Open vSwitch, so what I'm going to talk about today is some of the effort we've been putting into getting OVS to run on more platforms. Lately we've been focused on support for DPDK, Hyper-V, and Docker, and today I'll talk about the last two of those.

So, Open vSwitch and Docker. We've already begun working on integrating OVS and Docker. A few months ago, Aaron Rosen, one of the core Neutron developers, added support to OpenStack for OVS and Docker. In that case, as far as OVS and NSX are concerned, a container pretty much just looks like a standard VM, except it's connected with veths instead of the vifs that are usually used for VMs. To the OVS repo, we've added documentation and integration scripts for working with Docker, and we've begun working with the Docker community on adding OVS as one of the supported back ends for Docker.
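To make that wiring concrete, here is roughly what an OVS/Docker integration script does when it attaches a container to an OVS bridge: create a veth pair, hand one end to the container, and plug the other end into the bridge. This is a simplified sketch in Python shelling out to ip, docker, and ovs-vsctl; the bridge name, container name, and interface names are placeholders, not the exact names the real scripts use.

```python
import subprocess

def sh(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.check_call(cmd, shell=True)

BRIDGE = "br-int"       # OVS integration bridge (placeholder name)
CONTAINER = "web1"      # Docker container name (placeholder)

# Find the container's network namespace via its init process PID.
pid = subprocess.check_output(
    "docker inspect -f '{{.State.Pid}}' " + CONTAINER,
    shell=True).decode().strip()

# Create a veth pair: one end stays on the host, the other goes into the container.
sh("ip link add veth_web1_h type veth peer name veth_web1_c")

# Attach the host end to the OVS bridge, recording which container it belongs to
# in external_ids so a controller such as NSX can apply per-container policy.
sh("ovs-vsctl add-port " + BRIDGE + " veth_web1_h"
   " -- set interface veth_web1_h external_ids:container_id=" + CONTAINER)
sh("ip link set veth_web1_h up")

# Move the container end into the container's network namespace and bring it up.
sh("ip link set veth_web1_c netns " + pid)
sh("nsenter -t " + pid + " -n ip link set veth_web1_c up")
```

The external_ids entry is what lets the cloud management system and NSX map the OVS port back to a specific container, which is why the container looks just like a VM with a veth instead of a vif.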
The way we view Docker is that it's good for fast instantiation of well-known configurations, but security can be a little more difficult, because you have to protect against the full Linux API. So the way we've been viewing things is around security domains. Containers in the same security domain can run adjacent to each other; but if, for example, you're a service provider with different tenants, you may not want to run those containers adjacent to each other — you'd run them in different VMs or on different bare-metal hosts instead. I'll talk about those configurations in just a second.

The first one is the model I mentioned, where we're running on bare metal. These — this laser pointer isn't working — anyway, these different Docker instances are connected with veths into that OVS; that's the green circle there. As I mentioned, to OVS it doesn't really matter whether it's a vif or a veth; they're just interfaces. We then associate information from the cloud management system identifying what the other end of each veth is attached to, and OVS sends that information to NSX so you can implement your policies. This is supported already. But depending on how concerned you are about these things, we'd recommend that those Docker instances all be in the same security domain, because if one of them breaks out, it could affect the traffic for the others on that green switch.

Another model we're looking at supporting is running multiple Docker containers inside a VM. In this case, if all of the policy were implemented in that red OVS inside the VM and one of the Docker instances broke out, it could modify the traffic and the connectivity between things, breaking down the isolation you're trying to provide through NSX or any of the logical networking services you might run. So what we're proposing is that in these environments, the red OVS in the VM is used only to tag the traffic to identify which Docker instance it came from, and that information is relayed down to the green OVS on the host, which actually enforces the policy and connects the tunnels. This requires more work in OVS and with the upstream Docker community, but we're beginning to work on it.

Then I wanted to talk about our port of Open vSwitch to Hyper-V. This is a collaboration between VMware and Cloudbase, and as I mentioned, it's the largest external contribution we've had. The reason it's really external from both sides is that the core OVS developers are more Linux people than Windows people, so the group inside VMware that implemented the OVS support is a different group from the core OVS developers. VMware and Cloudbase had each developed their own port to Hyper-V, and what we decided to do was merge the efforts. Both teams proposed changes, and the core OVS developers went through, made suggestions, and pulled code in based on the requirements and the feedback we gave. So it's the newest supported platform. It uses the same user space: ovs-vswitchd, which is where most of the OVS switching code resides, is exactly the same across platforms, except now it's a Windows binary. All of the utilities, if you're familiar with them, are the same — ovs-ofctl, ovs-dpctl, ovs-vsctl — they're all identical, except now they're Windows binaries. The data path is implemented as a forwarding extension to Hyper-V's native virtual switch.
So the performance should match that of the native switch on Hyper-V. All the code is upstream in the master branch, and we're going to include it in OVS 2.4. But it's an initial release — it's not feature complete yet, and I'd only recommend using it for testing purposes. We're moving quickly, though; we want to reach feature parity and have it work just as well as on any of the other platforms. And finally, all the code for making it work with OpenStack has been written, but it hasn't been sent for review yet, so hopefully that will happen soon.

So I have a demo of using NSX and Hyper-V together. In the demo you'll see two hypervisors — Hyper-V hosts, in blue here on the bottom left — Hyper-V R3 and Hyper-V R4. You can see they have 10.x addresses, and they're connected together with a VXLAN tunnel. There are three VMs: one on R3 and two on R4, with addresses in the 172 address space. That's the physical layout, but logically, in NSX, we've configured these VMs to be on the same logical switch. When we run the demo you'll see how that's done.

Here we are on hypervisor R3, and you can see ovs-vsctl being run. Down here is the port OVS R3 port 1 in the ovs-vsctl output — that's the database that shows the configuration. And here, with the ovs-dpctl command, which communicates directly with the kernel module, you can see the port has been created as well. Now we go to R4 — we'll come back to the hypervisor in a second — and these are the two VMs running on R4. The first one has the address 172.168.1.4 and the other has 172.168.1.5. Remember, there are two VMs on this R4 host. Now we go to the hypervisor's PowerShell window, and you can see two interfaces shown by ovs-vsctl: OVS R4 port 2 and OVS R4 port 1. Both exist, and if you run the ovs-dpctl show command, you'll see they both exist in the kernel module as well.

Now we go to the NSX screen. First we'll look at R3's configuration — there's a bit of delay here from the VNC session — but you can see this is Hyper-V R3 in the NSX console, and there was one VM on R3. It's a little hard to read, but down here it's listed as OVS R3 port 1. Then here is R4, and these are the two ports from the two VMs we just saw. And this is the logical switch we created, which is VXLAN-based, called demo VXLAN Hyper-V. You can see that the three ports from those two different hypervisors are all listed down here, so the connectivity looks good, and you can see which belong to R4 and which to R3. Now we'll go into R3, which had the one VM, and in the logical space we'll ping one of the VMs on R4: we have the address 172.168.1.3 and we'll ping 172.168.1.4. All of that traffic goes out over the tunnel, and you'll see that as far as those VMs are concerned, even though they're on different hypervisors, they have local L2 adjacency.

This demo only shows two Hyper-V hosts, but it would work just as well if we had ESX and KVM and Docker — we have all of those working, and you can connect any of them together and they'd all appear on the same logical L2 segment, if that's how you configured it.
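In the demo, NSX programs the VXLAN tunnels between the hypervisors for you. To make the underlying plumbing concrete, here is roughly the manual equivalent on a plain OVS host — a small sketch in Python shelling out to ovs-vsctl; the bridge name and remote endpoint address are placeholders, not the demo's actual configuration.

```python
import subprocess

def ovs_vsctl(args):
    """Run ovs-vsctl with the given arguments."""
    subprocess.check_call(["ovs-vsctl"] + args)

# Placeholder tunnel endpoint for the other hypervisor, loosely following the
# demo's layout of two hosts with 10.x addresses.
LOCAL_BRIDGE = "br-int"
REMOTE_IP = "10.0.0.4"

# Create the integration bridge if it doesn't exist yet.
ovs_vsctl(["--may-exist", "add-br", LOCAL_BRIDGE])

# Add a VXLAN tunnel port pointing at the remote hypervisor. VM ports attached
# to this bridge then appear L2-adjacent to VMs behind the remote endpoint.
ovs_vsctl(["add-port", LOCAL_BRIDGE, "vxlan0", "--",
           "set", "interface", "vxlan0",
           "type=vxlan", "options:remote_ip=" + REMOTE_IP])

# Inspect the result, similar to what the demo shows with ovs-vsctl and ovs-dpctl.
subprocess.check_call(["ovs-vsctl", "show"])
```

Run on both hosts (each pointing at the other's address), this gives the same L2 adjacency the demo's ping shows; NSX simply manages these tunnels and the associated policy automatically at scale.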
So that's what we're working on for multi-hypervisor support and a preview of what's coming. If you wanted to go back to your demo slides — OK, I think that was all I had.

So in our environment we can support multiple types of hypervisors, and that's really the key thing: with the NSX product you can plug in any type of hypervisor you want, and we work with a lot of different companies to provide support for the different operating systems in that environment. So thank you for your time. If you have any questions, please feel free to ask. Any questions? Thank you.