I'll get started a little early, I guess, and introduce myself. I am not Brad Hedlund; I'm not quite as good-looking as him, so I figured you guys would have called me out if I said otherwise. I'm Dan Wendlandt. Brad, for family reasons, wasn't able to make it at the last minute. I was one of the original Nicira team members, was an engineer at Nicira, helped build up the technology, and worked with a lot of our customers. Nicira is now part of VMware, so I'm at VMware. You may also know me because for the past two years I was the PTL of the Quantum project, or the project formerly known as Quantum, and I'm recently stepping down from that to focus more on VMware activities.

Wait, you haven't started the video yet, have you? We've got a couple more minutes. I was going to say there's plenty of room on the floor, but that's actually not true; if you want to be sitting on the floor, you're going to have to be up here. I feel like it's been 10:59 for about three minutes. Still 10:59. All right, it's officially 11:00, so I want to get started quickly because there's a lot of stuff to cover. Oh, people are tweeting, that's good.

So, I'm Dan Wendlandt, filling in for Brad to give this talk since he wasn't able to make it. I was part of the original Nicira engineering team and helped out with a lot of our deployments, and we're now at VMware. For the past two years I've been the lead of the Quantum project, and I'm just stepping down to focus more on VMware OpenStack activities.

To give you an idea of what this talk is intended to be: it's what I would usually do with a customer in our first meeting, on a whiteboard. I hate PowerPoint, but given the logistics here I guess I've been forced to use it, so I stole a bunch of slides from Brad. If it's all text and looks like crap, it's from me; if it's shiny and pretty, it's probably from Brad. You can guess on this one.

OK, so I want to start out by talking about network virtualization, because you have to understand the problem we're trying to solve before you understand why we built the system; most of this talk will be about nitty-gritty technical details. Most of you are familiar with compute virtualization, right? You take physical hardware server resources, things like storage, CPU, RAM, and NIC, and you use really smart virtualization software to slice them up, decouple them from the physical infrastructure, and let people automate the provisioning of those resources. You can think about network virtualization in much the same way: you've got physical network resources, you have a virtual switch, maybe you even have something like a hardware load balancer, and you want a really smart software abstraction layer that can automate the deployment of the network configuration your workload needs, not just the compute configuration. Because you're only as good as your weakest link, and if you've only virtualized your compute but still have to wait days for your network to be deployed, you haven't really solved your problem.

So that's the marketing definition of network virtualization. This is what I think the technical definition is, and you'll see how it manifests itself in the design. Network virtualization is a faithful reproduction of the properties you have in the physical network. Can the workload tell the difference between being on a real physical network and on a virtualized network? I would argue that if you have real network virtualization, the answer is no.
Just like an OS workload can't tell the difference between being on a physical CPU and a vCPU. You have to be fully isolated: just like a virtual machine has memory that can't possibly address the memory of another VM, you need full isolation so that no network can address another tenant's network, and so that two tenants can even use overlapping addresses. Could you take one network environment and just clone it and deploy another? You also need to be able to place workloads in a way that isn't physically dependent on their location within the network. For example, can two VMs be on the same virtual layer 2 segment even if they're not on the same physical layer 2 segment? Can you migrate a VM from one layer 2 segment in the physical world to another without its connectivity changing? Likewise, you need to be independent of the physical state of the network: if you have to touch your physical hardware in order to deploy a workload or apply a firewall policy, your network probably isn't virtualized. When you've truly virtualized a network, all that state is pulled up into the virtualization abstraction layer and none of it lives in your physical hardware.

One other thing, just to call out what I'm not talking about when I say network virtualization: I'm not talking about simply running networking software in a virtual machine. There are valid and reasonable use cases for that, but it doesn't really change the operational model; it's really just that you're consuming x86 cycles rather than dedicated ASIC cycles.

OK, so now what I'm actually here to talk about, which is the Nicira Network Virtualization Platform, NVP. It's a software networking platform, and we'll go into all the technical details, that's compatible with KVM, XenServer, and VMware, and we released 1.0 back in July of 2011. As many people know, we're part of the Rackspace deployment, which is the largest OpenStack deployment that I know of, at least. One of the really cool things about NVP is that we release four times a year; in fact it used to be eight, but now that we're at VMware they made us slow down to four. It's a great example of how, you've heard the phrase "software is eating the world," right, think about the amount of innovation and the set of features you get in the time it would take you to go through a hardware refresh cycle. Our current release, which we'll be talking about today, is NVP 3.0, our most recent quarterly release, and we're just about to put out 3.1.

This will be our visual outline for today. This is the entire NVP stack that we'll be talking about, from the most basic and most physical up to the highest management layer, so from the physical network to the management layers. We'll take a path from the bottom of the stack up to the top and then circle around to pick up some of the other elements, and before I talk about a given component I'll highlight where we are in the stack, so hopefully you can keep the context.

Here's a very high-level view of what an NVP deployment looks like. The first thing to recognize is that from a physical perspective we treat the physical network just as a layer 3 fabric; all we care about is IP connectivity between different points at the edge.
If you're familiar with Open vSwitch, which we'll talk about a bit more, that's in all of our edge components: in our hypervisors and in these things we call gateways, which are how you get in and out of the virtual network space. Those are all data forwarding elements, anything that has OVS in this picture, and we'll deep dive into each of those. Then there's the NVP controller, which is software that is control plane only, never handles any packets, manages all those OVS instances, and can be driven both by something like OpenStack Quantum and, from an operational perspective, by an operational tool I'll show you as well called NVP Manager. Basically the goal is that tenants describe the logical networks they want via the Quantum APIs, and NVP manages, in a very detailed way, the flow state of all these OVS instances and creates tunnels across the physical L3 fabric, so that the workloads themselves believe the network they're connected to looks like their logical model, not this much more complicated physical model.

In everything we do there's a non-virtualized view, which is what we call this one, and then there's the virtualized view. The virtualized view is the platonic, simple view that someone would create via an API; it just describes what they care about and none of the other complicated stuff. You can create as many of these virtual network containers as you want. This is an example of a three-tier web application that uplinks to the outside world. You could clone and copy it, you can have multiple tenants create multiple of these, they can use the same addresses and they never conflict; just like a virtual machine, each one is an entirely isolated container. You can make modifications to them too; maybe you have one that doesn't want to uplink to the internet, it wants to uplink to its remote customer premises. So that's the difference between the virtualized view, where we're talking about the logical API abstractions you define, and the non-virtualized view, which is the real world where we're actually forwarding packets.

So again, like I said, we're going to start out working from the bottom. What NVP aims to let you do is treat your physical network like you treat your compute servers. What does that mean? It means a couple of things. First off, you can treat it as a big pool of capacity to be sliced up on demand, flexibly, for tenants. What do you do when you buy compute capacity? You look at some price/performance ratio, you say, oh, that seems like a good deal, I'm going to buy a bunch of racks of it, you pull them in, you do one-time setup, and then the rest is just programmatically spinning up workloads onto that capacity. You consume the capacity and you move on. Secondly, we rely only on commodity features, specifically L3 forwarding capability, within the physical network. Why that's important is that it means you can get your network hardware from anyone: you can buy it from one vendor, and when you're building out your next data center you can decide to buy it from another. You're no longer really tied to a particular network vendor, just like you aren't necessarily tied to a particular server vendor, because x86 is your simple commodity interface to buying that server hardware. Thirdly, the configuration of the physical network should only be done once, just like your physical servers: you rack them, you give them IP addresses, and you're done; you never touch them again.
There should never, ever be a human in the loop when you're provisioning application workloads. If you have a human in the loop and you're racing against Amazon, you've lost; you've already blown your price point out of the water. Finally, you should have the flexibility to change and update your physical network architecture from one data center to another as trends and such change, without that impacting the abstractions you expose to workloads. Your API shouldn't have to change just because you decided to push L3 down from your aggregation layer to your top of rack.

This is a Brad picture, you can tell. I don't know if you're familiar with the idea of a fat-tree network or a Clos network, but one of the really neat things about NVP is that it lets you have any physical network topology you want, because all it cares about is L3 connectivity between the hypervisors. That means you can actually run your layer 3 down at your top of rack, your leaf switch as it's labeled in these pictures, and that lets you take advantage of really good multipathing, because we all know there are issues with L2 and spanning tree, so the best thing to do, for bandwidth at least, is to make your L2 domains as small as possible. So this is the type of design you can go to, but the neat thing about NVP is that it doesn't require a design like this. Maybe you start out in your data center with your standard L3 at your aggregation layer, and then the next wing of your data center looks like this; your tenants don't need to care, because it's just IP connectivity to the network virtualization platform.

OK, so moving up the stack, we'll start to dive into what the hypervisor layer, in particular Open vSwitch, looks like. Most of you are probably familiar with Open vSwitch at this point. It's an open source virtual switch; it was started with a code contribution from Nicira back when we were building the first OpenFlow implementation, back in the day. It's been upstreamed into the Linux kernel, and it's a major building block for a majority of the Quantum plugins today. This kind of confuses people; people tend to have a bit of a misconception about Open vSwitch. We'll show some of the process and binary details on the next slide, but one common point of confusion is that people think of Open vSwitch like a hardware switch, which has a single feature set, so anyone who uses it gets the same feature set. Really, the way to think about Open vSwitch is that it's a software engine for doing generic flow lookups and tunneling of network traffic, and what really matters is how you program that engine and those flow lookup tables. You can do everything from really simple, which is basically telling Open vSwitch to do dumb L2 learning, and then it works just like a Linux bridge, to very complex, doing L2 and L3 forwarding, ACLs, quality of service, and all this other stuff. So think of Open vSwitch more like a Swiss Army knife that you can use to build really cool things, and that's why, even though there are a bunch of different plugins based on Open vSwitch, they all have different capabilities and advantages and disadvantages.
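To make that "Swiss Army knife" point concrete, here is a minimal sketch of the two extremes using the standard ovs-ofctl tool. This is illustrative only and not from the talk: in an NVP deployment the controller programs these flows itself over OpenFlow, nobody types them in by hand. The bridge name, port number, and MAC address are placeholders, and it assumes a host with Open vSwitch installed, an existing br-int bridge, and root privileges.

```python
#!/usr/bin/env python
# Illustrative only: in an NVP deployment the controller pushes these flows
# over OpenFlow. Bridge name, port number and MAC address are made up.
import subprocess

def ofctl(*args):
    """Run an ovs-ofctl command against the local Open vSwitch."""
    subprocess.check_call(["ovs-ofctl"] + list(args))

# "Dumb" mode: one catch-all flow with the NORMAL action makes the bridge
# behave like a plain learning L2 switch, i.e. like a Linux bridge.
ofctl("add-flow", "br-int", "priority=0,actions=NORMAL")

# "Smart" mode: layer higher-priority flows on top for ACL / port-security
# style behaviour; only traffic sourced from the expected MAC may leave
# the VM's port, and anything else from that port is dropped.
ofctl("add-flow", "br-int",
      "priority=100,in_port=2,dl_src=fa:16:3e:01:02:03,actions=NORMAL")
ofctl("add-flow", "br-int", "priority=90,in_port=2,actions=drop")

# Inspect what the flow engine is actually doing.
ofctl("dump-flows", "br-int")
```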
So here are some of the more technical details. Open vSwitch has both user space and kernel components. What you're looking at here, the blue box, is a single Linux host with two NICs; one of them connects to a management network, which is where the NVP controller would be. There are two processes in user space, ovsdb-server and ovs-vswitchd, that phone home to the controller. One is using the OpenFlow protocol, which is for the low-level flow programming, and the other is using something called the OVSDB protocol, which is for higher-level configuration state, things like tunnels and so on. Open vSwitch also has a really small embedded database where it stores persistent data in case your hypervisor reboots and things like that. The two blocks on the right are actual Open vSwitch bridges; this is what actually does traffic forwarding. br-int is the name commonly used for what we call the integration bridge; this is where all of the VMs on a hypervisor plug in. Unlike the old model of virtual networking, where you'd plug different NICs into bridges tied to particular Ethernet devices, with NVP you can logically rewire the connectivity that VMs have on demand. We don't know yet that we should wire a web VM into, say, the eth1 bridge, because we don't know what connectivity you might want it to have later. So everything plugs into br-int, and then NVP, based on the logical configuration, dispatches the packets and filters and prioritizes them appropriately. In particular, a common case is that the traffic has to be sent to another hypervisor, and with NVP that's often done using L2-in-L3 tunneling, which we'll cover in a slide or two. Basically, what that means is the packets are filtered and processed and then handed off to the Linux IP stack, which just uses the hypervisor's normal IP address to send the packet out onto the physical network via one of its NICs.

Good, I was hoping this was the next slide; like I said, this isn't my slide deck. A really important thing to understand is this L2-in-L3 tunneling and why it matters. Think for a second about wanting a truly, fully isolated environment where two different tenants, or two copies of something for the same tenant for that matter, say a prod and a test/dev environment, could be using the exact same set of IPs, even the exact same set of MACs. What you need is an encapsulation layer, so that the addresses chosen by one tenant never conflict with the addresses chosen by another. This is similar to what a compute hypervisor does with something like virtual memory. If you look at what happens, the packet the VM actually sends, which is represented in blue, carries the IP addresses and MAC addresses of the virtual machines that may be on that same virtual network. It's handed off to OVS, there's processing, and then the hypervisor slaps on another header, and that header has the IP addresses and MAC addresses of the hypervisors, and then it's sent across the physical network. This is what fundamentally decouples things and gives us all that location and state independence we talked about in our definition of network virtualization. That's why this kind of tunneling is pretty fundamental to the concept of network virtualization.
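As a rough sketch of the plumbing just described, here is what pointing OVS at a manager and creating a tunnel port looks like with the standard ovs-vsctl tool. In a real NVP deployment the controller cluster creates and removes these tunnel ports itself over the OVSDB channel; the IP addresses, port number, and tunnel key below are placeholders.

```python
#!/usr/bin/env python
# Illustrative sketch only: NVP sets all of this up automatically over the
# OVSDB management channel. Addresses, port and key are placeholders.
import subprocess

def vsctl(*args):
    subprocess.check_call(["ovs-vsctl"] + list(args))

# Point ovsdb-server / ovs-vswitchd at the controller ("phone home").
vsctl("set-manager", "ssl:192.0.2.10:6632")

# Create a GRE tunnel port on the integration bridge. The outer header of
# packets sent through it carries the hypervisors' IP addresses; the inner
# (blue) header keeps the VM's own MACs and IPs, so tenant addressing never
# leaks into, or depends on, the physical fabric.
vsctl("add-port", "br-int", "gre-hv2", "--",
      "set", "interface", "gre-hv2", "type=gre",
      "options:remote_ip=192.0.2.21",   # the peer hypervisor's IP
      "options:key=5001")               # per-logical-network tunnel key
```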
It also has some pretty cool side effects, which is that, like I said before, your physical network stays really simple, because all it does is connect hypervisors. You configure it once with the IP addresses of the hypervisors and then you just never touch it again; all of the additional state lives in the logical layer, and you never have to provision additional things in the physical network when a tenant spins up.

This is a bit of an aside, and I probably don't have time for it, but I'm going to do it anyway because it's a huge point of confusion: people tend to conflate the tunneling protocol with network virtualization. Tunneling is important to network virtualization, but it's just part of the solution; what really matters is the logic for how the flows are set up and the control plane for pushing those rules in. Here are two examples. First, GRE: I often hear people say, oh, I'm using Quantum with GRE for network virtualization. The answer is that GRE is just a tunneling format. GRE was around for years before anyone figured out how to put it in combination with a programmable switch to provision these kinds of tunnels and virtualized networks on demand. Similarly, if you've heard of VXLAN: when VXLAN was originally proposed, the primary mechanism for populating flows relied on multicast, which meant it was a total pain in the butt to use. So do me a favor: when you're thinking about network virtualization, it's not just about the protocol. Things like NVP can work with whatever protocol you want, even multiple protocols simultaneously, and we can pick the right protocol based on the properties you want from your virtual network. For example, if you have a virtual network that spans an untrusted WAN segment, we can make sure the tunneling protocol includes IPsec for security. Or say you need to communicate from a virtual network to a physical workload that sits on a switch that supports VXLAN; well, then we should probably encapsulate that packet with VXLAN, because the ASIC in the switch wouldn't understand any other type of tunneling protocol. So really think of the tunneling protocol as just a tool, a portion of the network virtualization solution; the tunneling protocol itself doesn't really solve any problem.

OK, now on to the cool stuff: the control plane. The NVP controller, just some basics: it's x86 software, runs on Linux, and it's built with a lot of distributed systems knowledge for high availability and scale-out; we'll talk more about that in a minute. It exposes a northbound API to a management system like Quantum, and it uses a southbound API, like we just saw, to talk to Open vSwitch. Essentially, what the controller does is constantly map between logical and physical. There's some logical configuration that was given to it by the API, and there's some physical world out there with VMs actually located on particular hypervisors, and the role of the controller is to always push down the flow state so that those two are in sync. If a VM migrates, we have to update the flow state to keep things in sync; if someone makes an API call to change the configuration, like changing a security policy, we have to push down new flows to keep that in sync. It sounds really easy when described at that level, but when you start to look at all the sets of features and all the things you're managing, it actually gets extremely complex.
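Here is a toy illustration, nothing to do with the actual NVP code, of that logical-to-physical mapping job: given which logical network each port belongs to (API state) and which hypervisor each VM currently lives on (physical state), derive the hypervisor-to-hypervisor tunnels that must exist, and re-derive them whenever a VM migrates or an API call changes the logical network. All the names and data structures are made up for the example.

```python
# Toy illustration (not NVP code) of the controller's core job: keep the
# physical flow/tunnel state in sync with the logical state from the API.
from itertools import combinations

# Logical state (from the API): which logical network each port belongs to.
logical_ports = {"web-1": "net-A", "web-2": "net-A", "db-1": "net-B"}

# Physical state (reported by the hypervisors): where each VM lives.
port_location = {"web-1": "10.0.0.11", "web-2": "10.0.0.12", "db-1": "10.0.0.12"}

def required_tunnels():
    """Return the set of hypervisor pairs that need a tunnel between them."""
    tunnels = set()
    for net in set(logical_ports.values()):
        hosts = {port_location[p] for p, n in logical_ports.items() if n == net}
        tunnels.update(frozenset(pair) for pair in combinations(sorted(hosts), 2))
    return tunnels

print(required_tunnels())

# A migration changes the physical state; the controller recomputes and
# pushes down new flow/tunnel state so the two views stay in sync.
port_location["web-2"] = "10.0.0.13"
print(required_tunnels())
```

The real controller is of course tracking far more than tunnels, things like security policies, QoS, and counters, which is where the complexity comes from.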
One other very, very important thing to know is that NVP controllers are never, ever, ever in the data plane. At large service providers we see individual hosts that send upwards of tens of thousands of packets a second; if you want to manage a real data center with workloads like that, good luck having all of that go up through a centralized controller. Sometimes people will argue that, well, for the average case it's doable, and that may be true, but you don't want your network to work only in the average case.

This one, you can guess who made this slide. It's talking a bit more about the controller, and particularly its HA and scale-out policies. HA and scale-out are obviously fundamental to what you need in a network control plane. With NVP you can basically cluster a set of hosts; they all communicate, and what they do is take all the work that needs to be done, slice it up into little chunks, and schedule it across the controllers. I don't think it's really shown here, but for a given piece of work there's a primary controller and a backup controller, so if one of those controllers dies, the other one is ready to take over. We use good distributed systems algorithms so there's no chance of split brain. There's even really cool stuff you can do, like live software upgrades of your control plane without ever taking the entire system down, because you can take down an individual node, upgrade it, bring it back up, and take down the next, without ever actually losing anything; it just looks like a failover to the system. I mean, it's incredible to see, upgrading the control plane managing tens of thousands of VMs without ever taking it down; if you've ever been part of a real network upgrade before, you understand just how mind-blowing that is. And look, you can add more nodes if you need more capacity and the work flows over to them, and I think one of these guys is going to die... yep, someone takes over its work.

OK, moving up a little further in the stack. This is detail I'm not going to go into, but the slides will be posted so you'll be able to look at it. Think of the NVP API as having two big chunks. One of them describes the physical world; this is the information the controller needs in order to decide who's going to tunnel to whom, what IP addresses those tunnels use, HA policies, and all of that. The second is the virtualized abstractions; these are essentially what Quantum uses to create logical topologies for tenants. One of the things you'll notice here is that it's not just about creating L2 segments that are better than VLANs; you can create logical L3, security policies, port security, ACLs, quality of service, packet statistics, port mirroring, and so on. It's really about giving you the full tool set we've built up over the years, because we understand what's necessary to operate a network: you need visibility, you need counters, you need to be able to prioritize. It's about providing that whole feature set, and the real magic of the controller is to take all of these different logical abstractions, look at where all the workloads are anywhere in your data center, and calculate the right flows to push down to everyone so that it's all consistent.
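As a toy sketch of the slice-and-schedule idea from the HA discussion above (again, not the real algorithm; the consensus and state-replication parts are entirely omitted), you can picture every chunk of work getting a primary and a backup controller, so that losing any one node just promotes backups and re-spreads the work. The names are made up.

```python
# Toy sketch (not NVP's real scheduler): chop the work into chunks and give
# every chunk a primary and a backup controller, so losing a node is cheap.
import hashlib

controllers = ["ctrl-1", "ctrl-2", "ctrl-3"]
chunks = ["hv-%02d" % i for i in range(8)]   # e.g. one chunk per hypervisor

def assign(chunk, members):
    """Pick a (primary, backup) pair of controllers for a chunk of work."""
    h = int(hashlib.md5(chunk.encode()).hexdigest(), 16)
    primary = members[h % len(members)]
    backup = members[(h + 1) % len(members)]
    return primary, backup

table = {c: assign(c, controllers) for c in chunks}
print(table)

# If ctrl-2 dies, its chunks are re-assigned across the survivors; the
# backups already hold the state, so to the edge it just looks like failover.
survivors = [c for c in controllers if c != "ctrl-2"]
table = {c: assign(c, survivors) for c in chunks}
print(table)
```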
So, moving up one more layer in the stack, there's Quantum and the Quantum API. I should be able to describe this one pretty well, at least I hope; I guess the pressure is high now. If you're familiar with OpenStack, which hopefully many of you are, there's some set of clients over there, which could be tenant scripts, could be the Horizon GUI, could be some other platform you built on top that's consuming the NVP or Quantum APIs, and there are different services: there's Quantum, which is the network service, and there's Nova, which is the compute service. In OpenStack, things are built so that there are generic logical APIs and different technology-dependent backends that implement those APIs. On the network side we might be using the NVP plugin; I didn't actually pick a hypervisor driver, so let's say the KVM driver on the Nova side. Let me walk through the basic flow of what happens here. First, a tenant might say, create me a network, net1, and then, boot me a VM on net1. What happens is that Nova goes to Quantum and creates a port on that network, and Quantum passes that request through to the NVP plugin. The NVP plugin goes and talks to the NVP controller cluster and uses the API I just described to create a port; it gets an ID back and returns that ID to Nova, and you'll see why in a second. Ultimately, Nova schedules the VM onto some Nova compute node, and it passes that port ID along, saying: when you create a vNIC for this VM, this is the Quantum port ID it's associated with. The reason that's really important is that you need to complete the loop: NVP, which is managing Open vSwitch, needs to understand, when a Linux device is created on Open vSwitch, which Quantum port it's associated with, which logical network it should be on, and what security settings it should have. So when Nova creates the VM and plugs that NIC into OVS, it actually passes that port ID, and Open vSwitch essentially reports that state to the controller, and the controller says: OK, I know this VM, there's already a port NVP created for it, it's supposed to have this security policy, this quality of service, and so on.
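Here is a minimal sketch of that tenant-facing flow using the OpenStack Python clients of that era, python-quantumclient and python-novaclient. The credentials, the Keystone endpoint, and the image and flavor IDs are placeholders; everything NVP-specific, the plugin talking to the controller cluster and the port ID being handed down to the compute node's VIF driver, happens behind these calls.

```python
# Minimal sketch of the flow just described; credentials, endpoint, image
# and flavor IDs are placeholders.
from quantumclient.v2_0 import client as quantum_client
from novaclient.v1_1 import client as nova_client

quantum = quantum_client.Client(username="demo", password="secret",
                                tenant_name="demo",
                                auth_url="http://keystone:5000/v2.0/")
nova = nova_client.Client("demo", "secret", "demo",
                          "http://keystone:5000/v2.0/")

# "Create me a network, net1" -> Quantum -> NVP plugin -> NVP controller.
net = quantum.create_network({"network": {"name": "net1"}})
net_id = net["network"]["id"]

# "Boot me a VM on net1" -> Nova asks Quantum for a port on that network,
# gets a port ID back, and passes it down to the compute node so the VIF
# plugged into br-int can be matched to its logical port by NVP.
server = nova.servers.create("web-vm",
                             image="IMAGE_UUID",   # placeholder
                             flavor="1",           # placeholder
                             nics=[{"net-id": net_id}])
```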
OK, so now we've moved all the way up the stack, and I'm just going to hit a couple of things on the side. L2 and L3 gateways are actually a really complicated topic that could probably have an entire talk of its own, so I'm just going to touch on them. Gateways at a high level: we talked about how we use tunneling between different OVS devices, which is great when everything you want to talk to is connected to an OVS device. What happens when that's not true, like when you want to talk to the internet? You need a gateway between your virtualized network world and your physical network world, and there are lots of really interesting, if you're a nerd, things about how those two worlds interface. We're just going to talk about two of the models today: there's a layer 2 model, basically an Ethernet model, at which you can interface, and a layer 3 model, basically a routed model, at which you can interface.

So first, layer 2. Layer 2 gateways essentially let you take physical workloads, or even a VLAN in a remote customer premises, and connect them up at layer 2 to a logical network in the cloud. If I'm a service provider and I want my tenants to be able to spin up VMs that appear to be showing up on their own customer premises at layer 2, I can use an L2 gateway. Now, layer 2 means there's actually broadcast, for example, sent between the two sides; these web VMs could be using a DHCP server located in the customer premises. Obviously this does nothing to change the speed of light, but assuming your proximity is good enough, you could choose to go with the L2 gateway approach and do that. So here we have a pretty simple example: one logical switch with two web VMs, uplinked to a VLAN in a customer premises with two database servers on it. That's the virtualized view, nice and simple. Now, what about the non-virtualized view? Remember, this is what the controller sets up automatically, so you don't need to worry about it, but I'm showing it to help you understand what the controller is doing. First off, you can see both the gateways and the hypervisors are phoned into the NVP controller; that's how the NVP controller understands what's plugged into them, pushes flows down to them, monitors status, and so on. And you can see that there are tunnels set up between the hypervisors and something called a service node, which I'll talk about in a little bit, and then eventually to the L2 gateway. What I'm showing you here is actually the most complicated case. The simpler case is that the hypervisors just tunnel directly to the L2 gateway; that's what would happen if it's, say, a service provider with physical hosting and cloud hosting in the same environment. But in this case we can actually do multi-hop tunneling to reach a remote customer premises on the other side of the WAN, and we can even be smart enough, as this shows, to use unencrypted, more efficient tunneling within the data center, but then, for going over the WAN, use secure tunneling, which of course consumes more CPU but is worth it because you don't know who might be sniffing on the WAN. So hopefully that makes sense.

Another thing: L3 gateways. L3 gateways are basically a way to interface at layer 3; I probably should have included a logical diagram here, but it's pretty close to what we showed in that first picture, where the way you uplinked to the internet in that three-tier topology was via a router. That actually gives us some additional flexibility in terms of how efficient we can make the gateways, so if it's all the same to you, it's probably a good idea to go with the L3 gateway. A couple of really cool things about the L3 gateway are how it works with HA and scale-out. HA means that if one of these gateway nodes dies, we need to make sure that, nearly instantaneously, traffic is rerouted to another one so the flow continues, and more importantly that any of the state, for example NAT connection tracking state, is transferred from the active gateway to the backup gateway. You can see here, I'm not sure these numbers all match up, but you get the idea: you can have an active router on one of the gateway nodes, and its backup, or you can specify the number of backups you want, will be on one or several other gateway nodes, and you can even split them up into failure zones.
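Here is a toy illustration (not NVP's actual placement logic) of the failure-zone idea: when picking where a logical router's active and backup instances live, never put them behind the same upstream switch. Node names and zones are made up.

```python
# Toy sketch of failure-zone-aware placement: active and backup instances
# of a logical router should never share an upstream switch (failure zone).
gateway_nodes = {
    "gw-1": "zone-A", "gw-2": "zone-A",   # uplinked via switch A
    "gw-3": "zone-B", "gw-4": "zone-B",   # uplinked via switch B
}

def place_router(n_backups=1):
    """Return (active, backups) spread across distinct failure zones."""
    active = "gw-1"
    used_zones = {gateway_nodes[active]}
    backups = []
    for node, zone in gateway_nodes.items():
        if len(backups) == n_backups:
            break
        if zone not in used_zones:
            backups.append(node)
            used_zones.add(zone)
    return active, backups

print(place_router())   # ('gw-1', ['gw-3']): active and backup in different zones
```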
So let's say you want your gateways to be resilient to an upstream switch failing. What you do is have them uplink through different upstream switches, so you have two different clusters of L3 gateways, and what NVP will do then is make sure, and I'm trying to scan this, it looks like that's roughly followed here but I'm not totally sure, that your actives and your backups are always in different failure zones, so that if one of the physical switches you're uplinked to gets knocked out, you fail over to a backup immediately. Another really important thing is that failover happens based on data plane probing. It's one thing to say, oh, I periodically send hellos back to a control plane server, and if I don't get five hellos in a row after 20 seconds I'll fail over to another guy. This is actually done using data plane probing from the individual hypervisors, so the hypervisors themselves detect that a gateway is down and are able to fail over very, very quickly to another gateway.

This one I'm not going to go into in all the detail. It's the same slide I showed you before about the physical fabric, but I wanted to highlight one more thing that I didn't talk about last time, which is in the bottom right corner there. You can imagine a special set of pods where you tend to put your L2 or L3 gateways, because remember, within most of your data center the only IPs you really need to think about routing are the IP addresses of your hypervisors, your gateways, and your controllers, just the infrastructure IPs. But you do need to make sure, for example, that if your L3 gateways are used to uplink to the WAN, you route your WAN connectivity to a particular cabinet or set of cabinets in your physical data center; that's what this represents. It also shows things like the NVP controller, the service nodes, and OpenStack being deployed in infrastructure cabinets. Again, these are more to give you an idea of how most people would deploy it; they aren't really strict requirements. And this one is too complicated, I'm not going to go into it.

All right, the last thing I want to talk about, or second to last thing, is service nodes. We saw service nodes before being used to reach that remote L2 gateway, but the primary use case is broadcast and multicast replication. A broadcast or multicast packet has to be sent from one hypervisor to every other hypervisor that has a VM on that same network; you can have the source hypervisor do that replication itself, or there's this set of service nodes you can use to offload that work so it's not consuming CPU on the hypervisor, and you can pick which model makes sense for you. Service nodes have a lot of the same HA and fast failure detection using data plane probing as the L3 gateways; in fact it's a simpler case than the L3 gateway, because you can just replicate the state to all the service nodes rather than having actives and backups. But again, the general theme is that anything in the data plane or the control plane needs to have scale-out and really good HA properties.
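And here is a toy sketch of the data plane probing idea, with made-up intervals and a stubbed-out probe; the real mechanism runs in-band over the tunnels from each hypervisor, not in a Python loop. The point is just that the hypervisor itself notices a dead gateway within a few probe intervals and swings traffic to a backup, instead of waiting on a slow control-plane keepalive.

```python
# Toy sketch of data-plane-driven failover. The probe is a placeholder for
# an in-band tunnel health check; intervals and thresholds are made up.
import time

PROBE_INTERVAL = 0.5   # seconds
MAX_MISSES = 3

def probe(gateway):
    """Send a probe over the tunnel to `gateway`; return True if it answers."""
    return gateway != "gw-active"   # pretend the current active just died

def monitor(active, backups):
    misses = 0
    while True:
        if probe(active):
            misses = 0
        else:
            misses += 1
            if misses >= MAX_MISSES:
                # Swing traffic to the first backup; demote the old active.
                active, backups = backups[0], backups[1:] + [active]
                print("failing over to", active)
                misses = 0
        time.sleep(PROBE_INTERVAL)

# monitor("gw-active", ["gw-backup-1", "gw-backup-2"])
```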
The last thing that you can't forget about is the management and operator tools, and there are a couple of cool things to point out here. Like I said, we do tunnel monitoring, so you as an operator can tell whether your physical network is giving you any connectivity problems. There's even something really cool called the port-to-port troubleshooting tool: if a tenant calls and says, I've got these two VMs and they can't talk, you can just enter them into a page and say, show me the connectivity between these guys, and it will show you what hypervisors they're on, what IP addresses the tunnels are using, what type of encapsulation they're using, whether the tunnel monitoring is up, all of that. And you can even use a tool called Traceflow, shown here, to actively inject traffic into the path between those two VMs and confirm whether your physical network is or is not forwarding that traffic. I like both of these because they're great examples of things where you go, wow, I never would even have thought about being able to do something like that, until you have this network virtualization framework built up and you have the infrastructure.

Another really cool thing I mentioned is around upgrades. Because this is built as a distributed system, we can fully automate the deployment of new versions, we can verify compatibility, we can roll back, and, as I mentioned earlier, you can do online upgrades, so you don't have to take your whole system down from a provisioning perspective to upgrade the software. That's another nice tie-in to the fact that we release four times a year: not only do we release software frequently, you're actually able to consume it frequently.
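The rolling-upgrade idea can be sketched like this (placeholder functions, not a real tool): because the control plane is a scale-out cluster, you upgrade one node at a time, and each step just looks like an ordinary failover to the rest of the system.

```python
# Illustrative only: placeholder functions standing in for the automated
# upgrade workflow described above; not an actual NVP tool.
def drain(node):        print("moving %s's work to its backups" % node)
def upgrade(node):      print("installing the new software on %s" % node)
def rejoin(node):       print("%s back in the cluster, resyncing" % node)
def verify_healthy():   return True   # placeholder health / compatibility check

cluster = ["ctrl-1", "ctrl-2", "ctrl-3"]

for node in cluster:
    drain(node)      # its chunks of work fail over to backup controllers
    upgrade(node)
    rejoin(node)
    assert verify_healthy(), "roll back before touching the next node"
```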
The last point I wanted to make before wrapping up: probably the biggest misconception about NVP out there is that it's just about scale, and I think that's because a lot of people are aware of our larger-scale deployments. I hear them saying, oh well, I'll do that once I get big, but I'm not sure there's any benefit in the short term. So I just want to call out a couple of very important things. First, data plane performance: we talked about the tunneling mechanisms, and I actually cut a slide on something called STT, but that's a really hyper-optimized tunneling mechanism we have that can take advantage of TCP segmentation offload, so it gives you really good forwarding performance compared to something like GRE. Second, and this is a theme you've probably picked up on, fast, reliable high availability, in both the data plane and the control plane; your network is something that always needs to work, it's a core piece of your infrastructure. Then the rich logical network capabilities: it's not just about creating a VLAN-like construct that spans two physical layer 2 segments, it's about being able to do security, statistics, and quality of service, and about being able to onboard customers or bring physical workloads into logical networks. And like I just showed, it's a lot about operator tools and being able to actually operationalize this, to do things like upgrades and troubleshooting in a very efficient and effective way.

So with that, I'm happy to take some questions, and I'd also encourage you to check out some of our other sessions. In particular, Martin Casado is speaking on Wednesday; I focused more on the technical side here, and he's the CTO and founder of Nicira. He'll be talking more about the customer side of things, what he's seeing out there. This guy travels all around the world, I don't think he's been home in a month, literally, so he's going to be talking more from the customer perspective, and it's a really good complement to this session, which is more focused on the technical side. So that's it, thanks.