Okay, we start in a minute. Can we close the door? That will be better for the noise. Thank you very much. Okay, thank you. So, we're here to talk about network virtualization. I'm Dimitri, I'm a technical marketing engineer and I work for the NSBU, the Networking and Security Business Unit within VMware. I come from Nicira, and sometimes I still say "Nicira" — my bad. So we're here to talk about the NVP plugin, the NSX plugin, within OpenStack, but just before that we'll start with a quick introduction on what VMware does overall in OpenStack.

Good afternoon, folks, I'm Somik Behera. I'm a product manager in VMware's Networking and Security Business Unit. As I was saying in the previous panel — a panel where some of you were already present — I was one of the founding members of OpenStack Neutron, otherwise known as the Quantum project, so I've been familiar with this space for a while. I wanted to give a preview of VMware's philosophy with OpenStack before Dimitri dives deep, gets into the detail, and does all the work, to make sure you understand everything.

So what do we think about OpenStack at VMware? Our belief is that OpenStack enables customer choice. It's an open framework — a framework, not a product. So you still need a hypervisor, right? You still need storage, you still need a SAN, you still need networking, and VMware sells best-of-breed solutions in each of those infrastructure spaces. So it was just the logical choice, for our customers and for us, to make sure our portfolio of software-defined data center solutions is very well integrated into OpenStack. We actually feel it's a net-new market opportunity for us, and an accelerator for our customers who are trying to adopt OpenStack and bring it into their environments.

In order to do that — peeling the onion a little bit — the components over there in blue are VMware components, the whole VMware blue. So as a customer you now have the choice to take our integrations with each one of these OpenStack components and use the entire VMware blue stack, or pick and choose what you want, right? At the end of the day it's about customer choice and how you want to build your own cloud, and we believe you should have that choice. For that, this release, we introduced plugins for Nova, NSX for Neutron, vSAN and other storage technologies for Cinder and Glance, as well as operations tools, which we hear from customers are a big hurdle to taking OpenStack into production: we have plugins for vCenter Operations, which provides predictive alerts, and Log Insight, which is about log analytics, so you can actually dig through those logs. We believe this essentially means you get the best of both worlds: OpenStack has this great framework — an open API, a great, vibrant ecosystem that provides choice — and VMware has best-in-class virtualization, the core engines that power the world's largest, most reliable and robust cloud environments. With that, I will let Dimitri take over. Thank you.
So let's focus now on what VMware does in the network piece. As you know, inside OpenStack you have multiple projects, and I'll focus on the network component, which is called Neutron. You may have heard, or you may remember, the old name Quantum — it has been renamed, it's the same thing. So I'll focus on what VMware does on that block.

Just one quick word: in the previous session, people talked about nova-network. That was the first option you had within OpenStack to provide network-as-a-service, and it's still available today. The last session talked about its limitations, and you have many benefits to using Neutron instead of nova-network. With nova-network you get limited network topologies — I won't even enumerate them, you'll have the slides, you can look at that — so you can host on your cloud only a subset of what your applications need. The big one: nova-network does not support a three-tier architecture, with, you know, a front-end web tier, a middle app tier, and a back-end database tier. You don't have a rich set of network services: no load balancers, no VPN. It doesn't scale very high — we can talk about that offline if you want. There's no integration with third parties to overcome those limitations. And this last one is not a big red X, because you do have some HA and some management tools, but they're very limited and rudimentary.

Anyway, that was the past. In OpenStack you now have Neutron, which enhances nova-network, and you can do more with it. We briefly mentioned the three-tier architecture: that's something you can now host within your OpenStack. There are more network services as well. On security, you can make rules to limit the traffic from outside to inside, and you can also make rules to restrict traffic from the inside to the outside world, where with nova-network it's only from outside to inside. You have Load-Balancing-as-a-Service, you have VPN-as-a-Service coming — richer network services with Neutron.

There is also the ability to create an overlay. When you create a three-tier architecture like this one, and you have that for multiple tenants, then instead of consuming a VLAN on your physical fabric for each subnet, you can create an overlay. When the orange VM at 10.10.10.10 talks to 10.10.10.11 — they're in the same subnet, but on two different hypervisors — the physical fabric will not see the IPs or the MAC addresses of the VMs; it will see the IP of hypervisor one talking to the IP of hypervisor two. However many subnets and network topologies you create, you don't touch the fabric, and that's pretty nice. Within Neutron, the encapsulation used is GRE.

Now, if there's any other limitation, or things you want to do that aren't available in stock Neutron, there is the ability within Neutron to extend your capabilities by adding plugins. Some of them are open source — the OVS plugin is a popular one — and some of them are vendor-based. At the top of the list you can see our plugin, the VMware NSX plugin, which was known as the Nicira NVP plugin in the past; you may have heard about that — it's the same thing, it has just been renamed. And there are other vendors: the Cisco plugin is there, somebody in the last session talked about some changes at Big Switch and they have a plugin as well, NEC too. So, different plugins — it's an open solution. So let's talk about why people use Neutron with our plugin.
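To make that security point concrete, here is a rough sketch of what both-direction rules look like against the Neutron API, using python-neutronclient. All the names, addresses, and credentials below are made up for illustration, and in a real deployment you would also remove the default allow-all egress rules for the restriction to actually bite.

```python
# Sketch: Neutron security groups support both directions, unlike
# nova-network's ingress-only rules. Names and CIDRs are made up.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='tenant-a',
                        auth_url='http://controller:5000/v2.0')

sg = neutron.create_security_group(
    {'security_group': {'name': 'web-tier'}})['security_group']

# Ingress: allow HTTPS in from anywhere (nova-network could do this).
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': sg['id'], 'direction': 'ingress',
    'protocol': 'tcp', 'port_range_min': 443, 'port_range_max': 443,
    'remote_ip_prefix': '0.0.0.0/0'}})

# Egress: only let the web tier reach the app subnet on its app port
# (the inside-to-outside control nova-network doesn't offer).
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': sg['id'], 'direction': 'egress',
    'protocol': 'tcp', 'port_range_min': 8080, 'port_range_max': 8080,
    'remote_ip_prefix': '10.0.2.0/24'}})
```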
People do that for multiple reasons. Our plugin is really well known for scale, so of course I'll talk about that, but it's not the only reason people use it.

Scale first, briefly, starting with the size of your cloud: you can have a really massive cloud. As of today — and this is enhanced in each release, and we do a release every three months — we support up to 60,000 instances, so up to 60,000 VMs, with 15,000 different tenants, hosted on 1,000 different hypervisors. And that's per NSX domain, so you can have multiple NSX domains to go even higher, and as I said, those numbers go up with each release. How do we achieve that? Thanks to a controller cluster: the brain of this is the NSX controller, a fully distributed system, and that's what lets us go so high.

Then, beyond the size of the cloud, there's how much you can push out of each hypervisor, because the VMs need to talk to other VMs within your cloud. We use encapsulation as well — I mentioned the encapsulation available within Neutron — but we don't use GRE; we use another type of encapsulation called STT, which is open sourced. It does encapsulation like GRE, but the big difference is that it doesn't use the CPU of the hypervisor to encapsulate the traffic; it uses the NIC's capabilities. So when the traffic goes out, we don't consume CPU on the hypervisor host; the CPU remains available for VM compute, which is what the hypervisor is designed for. We can saturate two 10-gig NICs bonded, so 20 gig out of each hypervisor.
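If it helps to picture what that controller cluster actually provides, here is a deliberately simplified toy model in Python — an illustration only, not NSX code. The controller keeps a mapping from each VM to the hypervisor that currently hosts it and pushes it out to the edge, so every host can encapsulate frames straight to the right tunnel endpoint and the physical fabric only ever sees hypervisor addresses:

```python
# Toy illustration only -- not NSX code. The controller cluster's job,
# reduced to a dictionary: map each VM's MAC on a logical switch to the
# tunnel endpoint (hypervisor IP) where that VM currently lives.
vm_locations = {
    # (logical_switch, vm_mac) -> hypervisor tunnel endpoint IP
    ('tenant-a-web', 'fa:16:3e:00:00:01'): '192.168.50.11',  # KVM 1
    ('tenant-a-web', 'fa:16:3e:00:00:02'): '192.168.50.12',  # ESXi 3
}

def encapsulate(logical_switch, dst_mac, frame):
    """Wrap a VM frame so the fabric only sees hypervisor IPs."""
    vtep_ip = vm_locations[(logical_switch, dst_mac)]
    # Real systems use STT/VXLAN/GRE here; the outer header carries
    # hypervisor-to-hypervisor addressing, the inner frame is untouched.
    return {'outer_dst': vtep_ip, 'inner': frame}

# When a VM moves (vMotion), the controller just updates the mapping
# and re-pushes it to the hypervisors; nothing changes on the fabric.
vm_locations[('tenant-a-web', 'fa:16:3e:00:00:02')] = '192.168.50.13'
```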
Then your cloud needs to talk to the external world. Your web VMs will talk to the app VMs and do their application stuff, but people will access the VMs from outside, or the VMs will download patches and other things from the internet, so you have north-south communication. That goes out via what we call NSX gateways, and those also run in a cluster, active-active, so we can do more than 10 gig out of each one, and if you need more you just add nodes to your gateway cluster — massive scale. Those gateways can offer L3 communication to the outside physical world, or L2 communication as well — I'll come back to that in a moment — so you can have a VM in one subnet talking to a physical server, a physical server not in your cloud, anywhere in the world, with L2 adjacency between the two.

I'll finish the scale section with optimized traffic. What do I mean by that? I'll give an example. Out of the box — with Neutron, not nova-network, as I said — you can build a topology like this one. If the database here wants to talk to a VM in the app network, the traffic goes to its default gateway, which is hosted on the Neutron server, and then on to the app VM. So in the physical world it goes from this hypervisor to the Neutron server and back to hypervisor one, where the app VM lives. Same thing if the web VM talks to the app VM: out to the Neutron server, back to hypervisor one. You can easily see that the Neutron server, which hosts all the logical routers for all the different tenants, can become a choke point when you have lots of east-west traffic, lots of L3 traffic, because everything goes through that one box.

With NSX we have the concept of a distributed logical router. When you stay within your cloud — let's use the same two examples — the database talks to the app server; the database is on hypervisor two, the app server on hypervisor one. The traffic goes to its logical router, which is right here on hypervisor two, then straight to hypervisor one. It doesn't go through an NSX gateway: that's used for only one use case, which I'll get to in a minute, and not this one, because we stay inside the cloud. Now the web VM talks to the app VM, and let's say they're on the same hypervisor: this logical router — the orange one, tenant B — is also present there, and the traffic never actually leaves hypervisor one; the physical fabric doesn't even see that L3 traffic. So when do you go through the NSX gateways, which scale massively because they're clustered? Not for traffic inside your cloud — only for the outgoing traffic, when those VMs want to reach the internet to download patches or whatever. Only the north-south traffic uses the NSX gateways.

So we've talked about scale. Some people also come to NSX for its ease of use around HA, management, and monitoring. HA for the control plane is provided by the NSX controller cluster, as I said before: the controllers work in an active-active fashion — that's why we scale so high in terms of cloud size — and in a distributed mode, but if one of them dies, the remaining nodes, thanks to health checks, automatically know that the node is dead and redistribute the load among themselves. There is no interruption in your cloud's activity when a node dies. Same thing on the transport side — the VM traffic, the production traffic, not the control plane. As you remember, the communication from inside to outside goes through the NSX gateways, which work in a cluster for massive scale, and the same concept applies: if one dies, the remaining nodes take over its work, and it happens very fast — we'll actually show that in a demo. It's not like you need to restart a new node, wait for it to join and pick up the traffic, and lose production traffic for many seconds or even minutes.
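Here is a toy sketch of that active-active idea — again just an illustration, assuming nothing about NSX internals: each tenant's north-south traffic is owned by one gateway node, health checks declare a node dead, and its tenants are respread over the survivors instead of waiting for a cold standby to boot:

```python
# Toy sketch of active-active gateway failover -- illustration only.
# Each tenant's north-south traffic is pinned to one gateway node; when
# health checks declare a node dead, its tenants are respread over the
# survivors rather than waiting for a replacement node to come up.

gateways = {'gw-1', 'gw-2', 'gw-3'}
tenant_owner = {'tenant-%d' % i: sorted(gateways)[i % 3]
                for i in range(9)}

def on_node_failure(dead_node):
    survivors = sorted(gateways - {dead_node})
    for tenant, owner in tenant_owner.items():
        if owner == dead_node:
            # Deterministic respread; in the real system a standby for
            # each tenant already holds synced state, so recovery is a
            # few seconds of detection time, not a reboot.
            tenant_owner[tenant] = survivors[hash(tenant) % len(survivors)]

on_node_failure('gw-2')  # gw-2's tenants resume on gw-1/gw-3
```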
Management is also something that is enhanced with our plugin. It's cool to have a cloud running, but one day a customer will call you and say, "Hey, my VM cannot talk to my VM," and you have to troubleshoot and understand why. So we have many tools that make OpenStack usable in production, and not only in a science lab. There are statistics, of course — in/out bytes and packets — but also other things. This one is very popular: port connection. When you have a cloud, you may lose track of where everything is, because you did not deploy it — your tenants self-deployed their infrastructure within your cloud — so you don't know where things are. When you get that call, "my VM cannot talk to my VM," this tool will quickly tell you, for that tenant, port two to port seven: here are the two VMs — this one is on KVM 1, that one is on ESXi 3 — so you know where they physically sit. And you know that when they talk, the communication on the physical network does not carry the IP addresses of the VMs; it's the IP of KVM 1 talking to the IP of ESXi 3, because the traffic is encapsulated. We actually check every 300 milliseconds — you don't have to remember that number — that the tunnel is alive and working fine, so you can see at a glance that the tunnel is healthy and those two VMs should be able to talk.

Now let's say it's still not working. We have other tools. You can generate a ping from that VM to that VM to validate that traffic is going through the physical network and through the two hypervisors, and you can do that without asking the customer for root access to his VM or asking the customer to do anything: you generate the traffic yourself, it really goes through your fabric to the other hypervisor, and you get a green or red flag. You can generate more than ping: you yourself — not the tenant — can craft a specific packet to validate it's well received on the other side. And I don't know if you're a network guy, but if you are, you've played with physical boxes and you know how to set up a SPAN to send the traffic from a port on a physical switch to your PC, where you have Wireshark and can look at things. You can now do that from the logical world: you say, "this VM's logical switch port — send all the traffic received and sent on that port to my PC," and you analyze it. If the VM moves — because some hypervisors support vMotion, or dynamic placement of VMs like vCenter does — this configuration remains, because the logical switch port remains the same. The port mirroring still works, and you still receive the traffic from that VM even after it moves.

And I don't know how long you've played with OpenStack, but if you've been through a few releases, you know what it is to upgrade OpenStack, and the challenge it can be. Upgrading our solution is actually a very simple process: built in, we have the ability to upgrade NSX through a nice automated UI, and since, as you remember, each element is clustered, this upgrades one node at a time. So you can upgrade your entire NSX cloud without interruption — smartly, automatically, one node at a time.

So we've talked about scale, and also about the points we're less known for, HA and monitoring. A few other things, and I'll finish with these: we also offer network services you don't get out of stock Neutron. Static routing — you know what that is, so I won't spend too much time, but a tenant can himself create a default gateway, a route to go somewhere else for a specific subnet, or a black hole: you cannot reach that subnet. I briefly mentioned this: we support L3 from logical to physical, and we support L2 from logical to physical, so you can have a physical server somewhere in a data center with the IP address 10.10.10.10 and a VM in your cloud at 10.10.10.5, and they have L2 communication through an NSX gateway. I can explain later, bit and byte, how that works if you want, but we do offer it, and one service provider in Asia is actually offering it to his customers today. ACLs — access control lists, you know what those are, so I won't explain — can be combined with security profiles, and that gives you more flexibility and more use cases: you should be able to host any kind of application your customers have within your OpenStack cloud with the NSX plugin. And QoS, in two flavors: guaranteed traffic for a VM and max traffic for a VM — that's within your cloud — plus QoS on the fabric. If there is any congestion on the physical fabric because there is too much traffic going on, we can tag the traffic so the physical network can differentiate, say, tenant orange's traffic from tenant blue's, using DSCP.
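The fabric-facing half of that QoS story is plain DSCP marking, which is nothing exotic. For instance, this is how an ordinary Linux application would set a DSCP class on its own socket in Python; NSX does the equivalent on the outer encapsulation header so the physical switches can prioritize one tenant over another (the class value 46 here is just an example):

```python
# Example of DSCP marking on a plain Linux socket -- an illustration of
# the mechanism, not NSX code. NSX sets the equivalent bits on the
# outer tunnel header so physical switches can differentiate tenants.
import socket

DSCP_EF = 46               # "expedited forwarding" traffic class
TOS = DSCP_EF << 2         # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
sock.sendto(b'marked packet', ('203.0.113.10', 9999))
```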
And I'll finish with one more thing before the demo: optimization of broadcast and multicast. If you have a cloud with multiple hypervisors and a VM sends a broadcast, you don't want that broadcast to go everywhere. Say the light-blue VM of tenant blue sends a broadcast: there is no point in sending that broadcast to hypervisor one, because hypervisor one doesn't have a light-blue VM in that same subnet, that same network. Again, our controller cluster is smart enough to know that, and it will tell the hypervisor to send it only to hypervisor two, not to one. So we distribute broadcast, unknown unicast, and multicast in a fast, optimized manner, because it doesn't go everywhere.
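Reduced to a toy Python sketch — illustration only — that replication decision looks like this: the controller knows which hypervisors host VMs on each logical network, so a broadcast is copied only to the hosts that actually need it:

```python
# Toy sketch of optimized broadcast replication -- illustration only.
# The controller knows which hypervisors host VMs on each logical
# network, so a broadcast is tunneled only to the hosts that need it.

hosts_per_network = {
    'tenant-blue-net': {'hv-2', 'hv-3'},    # no blue VMs on hv-1
    'tenant-orange-net': {'hv-1', 'hv-2'},
}

def send_over_tunnel(host, frame):
    print('unicast copy of broadcast -> %s' % host)

def replicate_broadcast(network, source_host, frame):
    targets = hosts_per_network[network] - {source_host}
    for host in sorted(targets):            # skips hv-1 for tenant blue
        send_over_tunnel(host, frame)

replicate_broadcast('tenant-blue-net', 'hv-3', b'ARP who-has 10.0.1.5')
```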
Okay, so that was roughly twenty minutes; now the demo, and then the Q&A. I have multiple demos, so I'll try to go fast, and we can go deeper at the end of the session if you want. The first one shows what I talked about: a multi-tier application hosted in your OpenStack cloud — an app with one VM in the web tier and two VMs in the app tier — with L3 communication to the physical world, L3 communication within your cloud, and L2 communication to a remote physical server somewhere else. That's demo one. In demo two I'll show vMotion — some of the VMs are on KVM, some are on ESXi — and show that everything keeps working, plus some management and monitoring tools, and I'll fail one of our gateway cluster nodes to show that everything still works fine for the tenants. And I'll show you very quickly that nothing changes: you still use your OpenStack dashboard — Horizon, or whatever dashboard you have — because it's the same API calls.

So let's jump into it. Let me sign in again — I guess I timed out. Okay, I go to my OpenStack, I go to tenant A, and you can see I have the two networks, web tier and app tier, and my logical router that routes between the two and out to the internet. You can see the VMs here: some are on ESXi, some are on KVM. Let's go onto the ESXi one — I'll show you where it is on my vCenter, here. ifconfig: 10.0.2.4. And this one — ifconfig, you can already see it — 10.0.2.5. And they can ping each other. I know, a ping is not that impressive, but you can see they have L2 communication within the cloud. You can see they can ping their default gateway — and that gateway is actually local on the hypervisor, because the router is distributed. If it wants to talk to another subnet — which one is it, .1.5 — the routing within your cloud works fine; the tenant doesn't know anything about it, but it's distributed, and that's pretty nice. And it can go to the physical world via L3, out of your cloud — I think it's this one, here we go. So I've quickly demonstrated L2 communication within the cloud, L3 communication within the cloud, and L3 communication to the outside world. Let me quickly show L2 communication to the physical world as well — again a ping, not any more impressive, but just to show it works. Here we go, there's one — perfect.

Now, demo number two, quickly: let's do a vMotion and see if anything breaks. I'll ping something in the physical world — 10.10.1.211, here we go — and I'll migrate the VM. Where is the VM now... it's on host 122; I'll move it to 121. Okay, the migration is in progress, the ping is still running, and — oops, I lost my console connection to the VM; that's not what should have happened — here we go: the ping was still running, and you can see I lost two pings, 39 sent, 37 received. That's the two seconds my vMotion took, and it's been completely transparent — you haven't touched anything. So if you have a hypervisor that does dynamic placement of VMs, placing them in a smart, dynamic way, you can still use that, and nothing changes on the network virtualization side.

Now let's look at some of the troubleshooting and monitoring tools we have out of the box. Someone calls you: "my VM app one cannot talk to my VM app two" — the case I gave before. Let's look at that. Oops, that's not where I wanted to go — my screen is too small, sorry for that. So I select — it's really too small — the switch names. Your customer calls and says, "I'm tenant A and my two VMs on network two don't talk together." Fine: you select the two VMs of tenant A, network two — hopefully you have a bigger screen so you don't have to scroll up and down — you click on port connection, and automatically, in real time, NSX tells you, for that switch, for those two VMs, three and four: they look all good; the first one is on ESXi 1, the second one is on KVM 1 — which shows you, by the way, that a mix of hypervisors is fine — and the communication between those two hypervisors is working fine. If they want to go out to the internet via L3, they go through that cluster of gateways, and everything looks good. You can run a ping and validate, without asking the tenant, that they can really reach each other. Or you can go directly here and craft your own packet: if they say "we can ping, but we cannot do this other kind of communication," you have trace flow, and you can inject the traffic yourself to validate it works — without asking the customer to do it or getting root access to his VM. Pretty neat. And if I redo the vMotion — let me wait for it to finish, here we go — you'll see that if I click go again, the information has been updated about which ESXi host the VM is on: it now shows ESXi 2, and KVM has been told, "now use your tunnel to ESXi 2." As you saw before, you lose only the two pings of the vMotion itself — that has nothing to do with this tool; it's just the VM moving between hypervisors.

I'll finish with the HA. So — oops, a remote demo on a small screen can be challenging — okay: I start my ping, and I kill one of my two gateways; they're actually VMs here. You'll see that the ping is still working fine, still running; there was no interruption. Actually, if I stop the ping, you'll see I lost four packets. So it's not minutes, or weeks, of getting your cloud back up to speed when a node dies — it's a matter of seconds, and it will always be between three and four seconds on a failure.

I think we've been through all the demos, so I'll just show you how easy it all is, using the same tool. On this tenant I have only one tier: I'm tenant B, and I have one VM on one subnet, and a router that routes between it and the outside. Let's say that, as the tenant, I want to create myself another subnet and build a multi-tier architecture. I create a new network — forget about the names, we're running late — 10.0.4.0/24, create, here we go: I created my subnet.
Now I add an interface on my router to reach that subnet: add interface — here we go, I've created a multi-tier application. Let's create a VM on that network. I'll call it VM two, pick this flavor — and I have a bunch of images, for KVM and for ESXi, but let's use ESXi for fun; the one already running is from KVM. Oops, it complains because I did not select a network — here we go. And voilà, my VM is coming up. If we look at ESXi — let's go to the vCenter — my VM is coming; you can see C77 has just been created and will start in a second. So for the tenant, nothing changes: he can create rich networks right from the UI. And for the cloud architect, it's just the same API calls — he doesn't have to change anything in what he built. Here we go: my VM is up and running and ready to use. That's the end of the demo, so we can open the Q&A session; while we wait for questions, I'll just put up some of the customer names we have, in the service provider space and the enterprise space.
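For reference, what the tenant just clicked through in Horizon is the same handful of API calls any tool could make. Here is a rough sketch with python-neutronclient — the names, credentials, and router lookup are made up for illustration:

```python
# Sketch of the same tenant workflow as direct API calls (names are
# made up; error handling omitted). Horizon, the CLI, and any
# orchestration tool all hit the same Neutron API, which is why nothing
# changes for the cloud architect when the NSX plugin sits underneath.
from neutronclient.v2_0 import client

neutron = client.Client(
    username='tenant-b-user', password='secret', tenant_name='tenant-b',
    auth_url='http://controller:5000/v2.0')

net = neutron.create_network(
    {'network': {'name': 'tier-2'}})['network']
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['id'], 'ip_version': 4,
    'cidr': '10.0.4.0/24'}})['subnet']

# Attach the new subnet to the tenant's existing logical router.
router = neutron.list_routers(name='tenant-b-router')['routers'][0]
neutron.add_interface_router(router['id'], {'subnet_id': subnet['id']})

# Booting a VM on the new network is then one Nova call, e.g.
#   nova boot --flavor m1.small --image <img> --nic net-id=<net['id']> vm2
# and the plugin wires the port into the distributed logical topology
# with no changes to the physical fabric.
```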
[Audience question about ML2] Yeah, sure, I can take that. So ML2 is just an architecture, from a Neutron perspective, for making things modular, right? ML2 is not fully production-ready — it was just introduced — but we do support it: we have the NSX plugin in the ML2 form factor as well as in the traditional monolithic form factor, and when the community adopts ML2, it will be fully operational with ML2.

[Audience question] Sure. I mean, it's a fully extensible platform, so if you have a VM that provides some services, you can still build on top of the virtual networks provided by NSX. NSX also allows networking vendors to partner directly within the platform at the data-plane level, which is a richer integration than Neutron's, which is more of a management-plane integration. So from ML2 you can do a management-plane-only integration, or a deep integration, depending on the partnerships that exist.

[Audience question about encapsulation] Yeah, so there are multiple kinds of encapsulation available. Again, we talked about GRE, offered by Neutron, and we talked about STT. Within NSX we actually support multiple kinds of encapsulation: you can still use GRE — your performance will be much worse and I don't know why you would do that, but we do support it — and you can now use VXLAN out of NSX, and I'm fine with that. That said, essentially one hundred percent of our customers still use STT. Why? Because of the performance. The day the NIC cards support encapsulation on the NIC, and not on the hypervisor's CPU, I don't see why people would still use STT — and it's coming: there are only two NIC vendors in the world, everything else is OEM, and both of them are working on doing encapsulation on the NIC. One thing, though: the strength of NSX is not that we use STT — which, again, we open sourced. It's the intelligence of the controller, pushing all the information to the edge, to the hypervisors, telling them what to do and where to go: the distributed router and all the things I mentioned. STT is just one small element of all the benefits you get out of NSX. Does that answer your question?

There was another one in the back. Oh — did I kill a controller or a gateway? I killed a gateway. So, when you go to the physical world, it's done through a gateway — we're actually working with partners, network vendors, to use the physical switch directly, but let's keep that aside. When you go out, you go through a gateway, and they work in an active-active fashion, as I said, because you have multiple tenants. So let's say you have 100 tenants and four nodes in your gateway cluster: each gateway will be in charge of some tenants for the north-south traffic, and another node will be standby for those tenants. Something unique that I did not highlight: the NAT sessions are synced. So if you have NAT going from inside to outside, or outside to inside, and there is a failover, the new active box for those tenants will keep working and their NAT sessions will keep working — nothing is lost. Now, there is a failover time for the standby to detect that the active is dead, and this is not done by the controller; it's done by the gateways themselves, because they talk to each other. That failover time is between three and four seconds, and that's why you saw three or four pings lost.

And the key thing about that is that it's network-level HA, a network-level failover — that's how networks are used to doing it. It's not like nova-network or Quantum; it's not server-level HA, where something like Linux-HA has to detect the failure and bring up another server, which takes much longer than a network-level failover. So we're talking about a couple of seconds instead of a minute of outage, and the NAT sessions are not lost: an SSH session, an FTP transfer, an HTTP connection, whatever — it still works.

[Audience question about the logical router] You take this one? Sure, yeah, that's definitely possible. You can bridge to the external world at L2 and let an external router take over, or you can have the external router point to the logical router within the NSX domain, so it connects at L3 and talks to the external router. — No, but we're in the logical world here, too. I mean, HSRP is for when you have two physical boxes; our logical router is, a little bit, everywhere — it's a distributed router — so it just doesn't make sense. Okay, I think we're running out of time, but if you have any questions we'll be next to the door for the next 50 minutes. Thank you very much — thanks, everybody.