Thanks, everyone, for coming. My name is Andre Pech, and I'm a director of software engineering at Arista Networks. This talk is about running OpenStack on top of a VXLAN fabric. Come on in.

This talk really grows out of conversations I've had with customers over the past six months about how to run OpenStack over VXLAN. If you don't know about Arista Networks, we're a data center networking company; our infrastructure is at the heart of some of the largest cloud service providers, web companies, large financial institutions, and enterprises in the world. A lot of our customers are interested in OpenStack as an infrastructure on top of which to build their business. (Apparently I started too early; come on in, guys.) They're also interested in VXLAN, and in network virtualization in general, and in the flexibility that can bring as part of automating and scaling their infrastructure. As co-authors of the VXLAN spec, and as the vendor with the first shipping hardware switch with VXLAN support, we're often asked how to run OpenStack on top of VXLAN. That's really what this talk is about, and the answer has evolved a lot over the past six months in terms of what solutions are available in the marketplace. So what I really want to talk about is: what can you do today, what are your options, and what are some areas that still need improvement in Neutron, and in OpenStack and VXLAN in general, that can be food for thought for the future.

The overview is this: I want to talk a little bit about VXLAN and give a quick refresher for those who aren't familiar. How many of you are familiar with VXLAN, show of hands? All right, there we go. I'll also talk about why VXLAN might matter to you and why you might consider running it in your network. Then I want to go through the network design requirements: what do you need to think about if you're going to run OpenStack on top of VXLAN, and what are some of the decision points you have to make, because there are trade-offs in what you can do. Finally, after going through those decisions, I'll talk about some of the designs that exist today and think about the future.

A couple of random thoughts here. One, I don't really want to just talk for 40 minutes. If you have questions, raise your hand and interrupt me in the middle.
That's fine; I'll try to leave time at the end to talk about things. And I guess, as a vendor, the examples I use here talk about solutions that Arista provides with partners, but I think the trade-offs and options are applicable regardless of which solution you end up choosing, so I hope this is generally applicable.

The other thing to say is that VXLAN as a technology is more generally applicable than just running overlays or tunnels within your data center, but that's what I'm going to focus on today. I'm not going to talk about VXLAN for data center interconnect, or how to use it for private-to-public cloud connections, things like that.

So let's start with a quick refresher on VXLAN for those of you who aren't intimately familiar, and hopefully lay out some terms so you actually know what I'm talking about as we go. VXLAN is really just a standardized overlay technology for encapsulating and carrying layer 2 traffic on top of an IP fabric. VXLAN networks are terminated by VTEPs, or virtual tunnel endpoints. These are the points in your network that make up the edges of your VXLAN tunnels and that do the encapsulation and decapsulation of traffic into and out of those tunnels, at the edge of your IP fabric. A VXLAN is identified by a VNI, a VXLAN network identifier; here we've got VNI 5000. What happens is that if host 1 is trying to talk to host 2, it sends an Ethernet packet to host 2's MAC address, and that packet hits VTEP A. VTEP A looks up the fact that host 2 is reachable over VTEP B, encapsulates the packet, slaps on a VXLAN header, and sends it over the IP fabric to VTEP B. VTEP B decapsulates the packet and sends the original packet on to host 2, which receives it without knowing that any encapsulation happened in between. That flow is pretty standard for network virtualization in general, not just VXLAN.
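To make that encapsulation step concrete, here is a minimal sketch in Python of what a VTEP conceptually does when it wraps a frame. Per the VXLAN spec, an 8-byte header (a flags byte plus the 24-bit VNI) is prepended to the original Ethernet frame, and the result rides in a UDP packet (IANA port 4789) between the two VTEP IP addresses. The function names and the bare UDP socket are just for illustration; real VTEPs do this in the hypervisor kernel or in the switch ASIC.

```python
import socket
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prefix an Ethernet frame with an 8-byte VXLAN header.

    Header layout (RFC 7348): 8 bits of flags (0x08 = "VNI is valid"),
    24 reserved bits, a 24-bit VNI, then 8 more reserved bits.
    """
    flags_and_reserved = 0x08 << 24           # I flag set, reserved bits zero
    vni_and_reserved = (vni & 0xFFFFFF) << 8  # VNI in the upper 24 bits
    header = struct.pack("!II", flags_and_reserved, vni_and_reserved)
    return header + inner_frame

def vxlan_decap(payload: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header and return (vni, inner Ethernet frame)."""
    _, vni_and_reserved = struct.unpack("!II", payload[:8])
    return vni_and_reserved >> 8, payload[8:]

def send_to_remote_vtep(inner_frame: bytes, vni: int, remote_vtep_ip: str) -> None:
    """Ship the encapsulated frame over the IP fabric to the remote VTEP."""
    packet = vxlan_encap(inner_frame, vni)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, (remote_vtep_ip, VXLAN_UDP_PORT))
```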
Now, it's easy to get confused about VXLAN as something new and to wonder how the traffic flows work, and I think one easy way to dig yourself out of that is to think about how standard layer 2 dot1q VLANs work and then apply that to VXLAN. Obviously, what I've shown is a simplified picture, and it's missing two basic pieces of layer 2 networks: learning and flooding.

So how does MAC learning work? There are two options. One is to learn over the tunnel. The basic idea is that you need to know the IP address of the VTEP behind which a given MAC address lives. If you receive a packet over the tunnel and decapsulate it, you have the IP address of the sender, and you can look at the inner packet and see the source MAC address; now, when you're trying to reach that MAC address, you know which IP address to send your VXLAN-encapsulated packet to. That's one option for MAC learning. The other option is to use a protocol, or a controller of some sort, to pre-distribute MAC addresses to the VTEPs. This is particularly useful in an OpenStack or other virtualized environment, where you've placed your VMs explicitly, so you know where they live and which virtual switches they're behind; you have options for distributing the MAC addresses in a better way and avoiding some of the learning and flooding that would otherwise happen.

But you still need to be able to flood packets. Generally this is for BUM traffic, that is, broadcast, unknown unicast, and multicast: you need a way to send a packet out and have it flood to all the hosts that are part of that logical layer 2 network. To do that, there are two different options. One is to use IP multicast. This is the standard, protocol-based way: every VNI has an IP multicast group associated with it, all the VTEPs that care about that VNI join that group, and if you get a packet and don't know who to send it to, you encapsulate it and send it to the multicast group, which distributes it in an efficient, hardware-assisted way across your network. The other option is to do something called head-end replication, often combined with a replication node that's purpose-built for this. The idea is that instead of multicast, you just send a unicast copy to every VTEP that cares about that VNI, and it's left to some outside mechanism to know what that full list of VTEPs is.
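Here is a small, hypothetical sketch of the forwarding behavior just described: a VTEP keeps a per-VNI table mapping inner MAC addresses to remote VTEP IPs, learns those mappings from the outer source IP whenever it decapsulates, and when the destination MAC is unknown it head-end replicates to every VTEP in its flood list (with IP multicast you would instead send one copy to the VNI's group). The class and field names are invented purely to illustrate the logic.

```python
class Vtep:
    """Toy model of a VTEP's forwarding state for a single VNI."""

    def __init__(self, vni: int, flood_list: list[str]):
        self.vni = vni
        self.flood_list = flood_list      # IPs of every other VTEP in this VNI
        self.mac_table = {}               # inner MAC -> remote VTEP IP

    def receive_from_fabric(self, outer_src_ip: str, inner_src_mac: str,
                            inner_frame: bytes) -> bytes:
        """Decapsulation path: learn where the sender's MAC lives, then
        hand the original frame to the local hosts."""
        self.mac_table[inner_src_mac] = outer_src_ip
        return inner_frame

    def send_from_host(self, dst_mac: str, frame: bytes) -> list[tuple[str, bytes]]:
        """Encapsulation path: returns (remote VTEP IP, frame) pairs to send.

        Known unicast goes to exactly one VTEP; BUM traffic (broadcast,
        unknown unicast, multicast) is head-end replicated to the whole
        flood list.
        """
        if dst_mac in self.mac_table:
            return [(self.mac_table[dst_mac], frame)]
        return [(remote_ip, frame) for remote_ip in self.flood_list]
```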
So that's the quick refresher on VXLAN. If you guys want to come up, there are some seats up here in the middle; feel free, don't be shy. The obvious question now is: why do I care? Why would I want to run VXLAN in my network?

There are a couple of different reasons. The most obvious one, and the most interesting in the context of OpenStack if you're a service provider, is that you get past the limitation of roughly 4,000 VLAN IDs for tenant networks. With VNIs you can go up to 16 million IDs, which means that on a given cloud you can handle more and more tenants; if that's your business, that's important. But even if you're not someone who has 16 million different networks that need to be logically separated at layer 2, VXLAN still has a lot of great benefits. One of them is that it solves the MAC address scaling problem at the core of your network. Today, in a standard layer 2 network, your core needs to know the MAC address of every virtual machine in your entire cloud. That means the hardware requirements, and therefore the cost, of your core network scale as you build out your cloud, and you eventually hit a cliff. With VXLAN you've encapsulated all that traffic over layer 3, so you only have to learn the MAC addresses of the VTEPs in your environment, which is generally several orders of magnitude fewer. And in general, layer 3 fabrics are much more scalable than layer 2 networks: you can use 4-, 8-, 16-, or 64-way ECMP and build enormous layer 3 fabrics on top of which to run your OpenStack cloud, unlike layer 2, where you're generally limited to what we call MLAG or virtual-chassis-type aggregation at the core of your network.

One important aspect to remember with VXLAN, compared to other network virtualization solutions, is twofold. First, you only need support at the edge of your network. This isn't a rip-and-replace where you need to put VXLAN support throughout your entire network; you can use the equipment you've already bought and just add VXLAN support at the edge. Second, VXLAN can actually be supported in network ASICs, and we'll get to why that's important right now.

So what are some real-world requirements for deploying OpenStack over VXLAN? You need VTEPs that do VXLAN encapsulation and decapsulation, and, speaking of the different options we talked about before, you need them to work without IP multicast. I'm sure there are some people out here who heard me say IP multicast as an option for distributing BUM traffic and wanted to scream and run out, but people want to deploy VXLAN without IP multicast support. It's unfortunate at some level; I find it sad, because IP multicast is great: it's very efficient and it's a standardized protocol. But people don't want to have to run it in their network. They don't run it today, or they may not have the option to run it because they don't control the whole infrastructure on top of which they're running. So we need a solution that doesn't require IP multicast.

The second requirement is hardware VXLAN gateways. This doesn't mean every VTEP has to be a hardware gateway, but you need the performance and density of physical gateways at the places in your network where it matters, and generally that happens in two different places. One is at the north-south boundary of your cloud: how do you get traffic into and out of your VXLAN-based cloud environment?
You want to be able to do that at densities and performance that aren't a bottleneck for your cloud. The second place is when you have physical infrastructure, whether it's non-virtualized servers, firewalls, load balancers, or storage, that doesn't have VXLAN support but that you want to be logically layer 2 connected to the VMs in your environment.

Now, I'm from a hardware vendor, so I imagine there's potentially some skepticism here, and I want to go into why this is important a little bit more. Who wants to venture a guess at how much throughput or density you can get through a software-based VXLAN gateway? I'll be optimistic and say two to four ports of 10 gig, and maybe that increases over time. But if you look at what the hardware can do: our 7150 series does 64 ports of 10 gig, and our newly announced 7050X and 7250X can do 256 10-gig ports in two rack units. You're talking about massively different scales. If you're talking about storage, or your firewall, or your load balancer, you actually need that performance. Or if you're a hosting provider with a ton of physical infrastructure that you've been selling to customers for the past 20 years, and you now want to build a cloud service and sell it to those same customers while still providing layer 2 connectivity between the two, you need some way to bridge the gap, so that when they want that next piece of infrastructure, you can sell them a VM in the cloud and have it behave as if it were directly connected. That's where hardware VXLAN gateways come in and become a requirement for truly building OpenStack on top of VXLAN at scale and in production.

Given all that, I want to talk about some key design decisions. There's no real right answer to any of these; they're just options you have as you look at building OpenStack on top of VXLAN, and we'll go through each of them. The first is the choice between software and hardware VTEPs. Software VTEPs are, if I can call it that, limitless: they're limited only by RAM and CPU, which generally is easily increased if need be. That's great, especially given that part of what VXLAN is trying to solve is exactly this problem of hitting hardware scale limitations at the core of your network, for example in your MAC address table, and you want to be able to build an initial network and scale it without hitting points where you end up having to redesign everything.
So this is an important piece But I think it's also important to think about The fact that that kind of flexibility isn't free You know, there's kind of an overhead of 10 to 30 percent of Doing the encapsulation and the decapsulation in your software v-tap If you're a cloud provider and what you're fundamentally making money on is selling VMs to customers that might be really important And so I think there's a trade-off there between kind of this the flexibility of software And and hardware which can do you know greater densities better performance But obviously is limited by hardware table size so you have to have to worry a little bit more of does a specific hardware you know kind of meet the needs as you look to grow and You know the nice thing about hardware v-taps is their cost is kind of You know constant regardless of whether you use them for VX land or just use them as the IP fabric Because the power consumption is all the same. And so you kind of have the option of doing it doing either It's really just a question of you know, what what what is your network need? And what sort of flexibility you need so that that's one decision The other interesting point about software and hardware v-taps is you know at the end of the day you need to manage your network and You know, I think this is different at every company but there's kind of your your physical networking team that's managing that the networking infrastructure and there's the you know, let's call it your your cloud or compute team that's kind of managing open-stack resources and and and your servers and You know virtual switches fall on a funny middle ground where Your networking team does need to have the visibility and the tools to debug your network in the same way as it did before before overlays, right and so One you know part of this is a requirement placed on on virtual switches and and kind of the ability to do things like mirroring and S-flow just like you would in on the physical switch for your networking team It also comes down to visibility at the physical network and and whether you can kind of match You know when you can take actions on the traffic that's potentially encapsulated and do the sorts of things you need to do by Looking at the inner header and and one advantage of hardware v-taps is you can kind of Before encapsulation do all the things that you you did before For your networking team and so again depending on the tools you have and how you're trying to Integrate into your existing environment This is an important thing to consider. So that's that's kind of the software versus hardware v-tap design decision, right? 
Another decision, where again I'm not sure there's really a right answer: when you look at solutions, they generally come down to either having a replication node or having every VTEP do head-end replication. A replication node is generally a purpose-built server whose only job is to replicate BUM traffic. If you don't know where to send a packet, because it's multicast or unknown unicast or whatever, you send it to this replication node, and it is responsible for sending a copy to every VTEP that cares. So you can purpose-build this thing, give it the right resources, and scale it out by doing an ECMP-like spreading of flows onto different replication nodes. That's great, but you now have replication nodes to manage: you have to deal with HA, you have to make sure that when one fails another one takes over. There's a management cost there, and generally that's hidden from you by a controller. Head-end replication at each VTEP, on the other hand, is nice because it doesn't require any HA of its own, but it puts the burden and the cost of replication on each of your VTEPs, which, if they're in software on your compute nodes, is maybe something you don't want, because you want to run VMs on your compute nodes. So that's another design decision.

The last one is the question of whether you use an external SDN controller or what, for lack of a better term, I'm calling standalone Neutron. This is a hard trade-off to quantify; it really depends on what you're trying to do and what your business is, and fundamentally I think it comes down to a trade-off of features versus cost. Depending on your feature requirements and what you need out of your network, you may choose a particular SDN controller, or you may decide to use standard Neutron with the OVS plugin, or the ML2 plugin I should say now. The trade-offs vary a lot depending on what you choose, but that's the last main design decision.

With that, what I'd like to do is talk about three different designs for OpenStack over VXLAN: what the networks actually look like, what the trade-offs are, how you can actually do this today, and where things fall short. Before I do that, are there any questions? No? I said I didn't want to talk for 40 minutes. There we go. Sorry, yes.

Well, I think we can look at the different designs that are coming up here. I'm not going to point to one of these and say that's the best option; it really comes down to a lot of different things. The questions I generally ask are: how many tenant networks do you need? What are your scale requirements? What type of traffic are you handling, and do you have a lot of multicast? What's your business, and what's important to you? All of these come into play when you're making a decision, and that's where understanding the different reasons for one decision or another matters. Neither choice is bad;
they just have different implications, and I think understanding that is important.

Yeah, so OVS today, and the ML2 plugin using OVS, supports VXLAN. The NSX solution is built on top of OVS, and I think there are many controller solutions that make use of OVS as well. The standard VXLAN solution with OVS is effectively the same as GRE tunneling; it's just a different encapsulation protocol, and you learn over the tunnel. There are some folks who have changed this with the L2 population driver, but learning over the tunnel is the default mode.

Yeah, so that's your question; I'll get to that. Really good question. I can repeat the question too, sorry: with a software VTEP, if we're going to do a full mesh, what would the scale limitation be, as well as the performance penalty? I don't think I have numbers I can give you right now. I think most people agree, and this was brought up in the Neutron design summit yesterday, that OVS with VXLAN by default right now is not really deployable without the use of an SDN controller. I think that's the unfortunate truth, but it's getting improved. For controller-based scalability, some of that depends on OVS, but mostly it's a question of what's been tested and supported by that controller solution, so it depends on the controller. All right, thank you.

So let's go into some of these designs, with my nifty PowerPoint drawings. The first one I want to talk about is: how do I run OpenStack on top of VXLAN using an external SDN controller, with software VTEPs on all my compute nodes, but with hardware VTEPs at the points where I actually do have performance requirements? Specifically, in this picture I've got my core switches, I've got top-of-rack switches, I've got compute nodes A, B, and C, I have some physical infrastructure that's connected to a gateway serving as a VTEP, and then I have VTEPs at the north-south boundary to terminate the VXLAN tunnels. Obviously you can mix and match where you have physical infrastructure; it doesn't have to be one rack of physical infrastructure and one rack of compute nodes, although that's fairly common. What I'm going to talk about here is what one of these solutions looks like. In the context of Arista, we've partnered with VMware NSX as well as PLUMgrid to provide integration between the SDN controller and our hardware gateways.

The way this looks from a Neutron perspective is that Neutron is running your controller's plugin. Generally that plugin pushes all the information to the controller, whether it's NSX or PLUMgrid or what have you, and the SDN controller is then in charge of managing the virtual switches. If you take the physical hardware gateways out of the picture, that's what it does: it manages the virtual switches. Part of doing that is that it pre-populates the VXLAN MAC tables, because everything behind those compute nodes, or almost everything, is actually a VM: you know what it is, it was placed there, you were told about it.
So you can do this efficient provisioning of MAC address tables across all your VTEPs and really limit the learning and flooding, and in these solutions you generally have a replication node handling all of your flooding, to optimize that.

But now you want to get physical infrastructure into the picture, and there are two things that are important there. One is: how is that physical infrastructure provisioned? Tenant networks have VNIs, they're chosen by the controller, and you don't know them up front, so you can't just statically provision this. The other is: how do the virtual switches and the physical switches that are playing this VTEP role share information about connectivity? How does the green VM on compute node A know that there's some physical infrastructure with a given MAC address behind the VTEP on top-of-rack 4?

The way this works is that you have your physical infrastructure, and you've provisioned it up to the gateway: you've set up your storage with VLANs, you've configured all of that. Then what you want to do is ask the controller, or Neutron, to map a given port on a gateway, for a given VLAN tag potentially, into a tenant network. By having the physical infrastructure integrate with the SDN controller, the SDN controller can push that information down to us and tell us: okay, you need to put VLAN 5 from this port into VNI 5000. That's how the initial provisioning of the VTEP happens. But then you also need reachability information. This is where you can share the information the physical gateway learns about which MAC addresses are there, just standard learning: when someone speaks, you know it's there. The SDN controller knows where everything is in the virtual environment, and by sharing that information you can have every virtual switch and every physical switch know where every MAC address it cares about lives. That's how you get a multicast-less VXLAN environment with an SDN controller.

Quickly, here's what the packet path looks like. Say the green VM is sending a packet, and the VTEP here, which let's say is OVS, encapsulates it. The first time you speak, let's say to physical infrastructure you don't know about, the packet hits that VTEP, the VTEP doesn't know where that MAC address lives, so it sends it to the replication node, and the replication node sends it to all the VTEPs that are part of that VNI, including top-of-rack 4. Top-of-rack 4 decapsulates it and floods it locally. The physical infrastructure receives the packet and responds, and because the gateway has already been told where the green VM lives, the response just hits that VTEP, which encapsulates it straight back to the green VM. So you have this end-to-end communication, and from the response we're then able to tell NSX, let's say, where that physical infrastructure lives, so that any other virtual switch that needs to talk to it knows exactly which VTEP to send to. That's OpenStack with VXLAN and an SDN controller, with a mix of software and hardware VTEPs.
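None of this is any particular controller's real API; the following is just a hypothetical sketch of the two pieces of state being coordinated in this design: a binding that says "VLAN 5 on this gateway port belongs to VNI 5000", and a shared table of MAC-to-VTEP reachability that gets pushed to every software and hardware VTEP so known unicast traffic never needs to be flooded. All class, method, and argument names here are made up.

```python
class ToyController:
    """Hypothetical controller state for tying hardware gateways into an
    overlay: not any vendor's real API, just the two mappings described."""

    def __init__(self):
        self.port_bindings = {}   # (gateway, port, vlan) -> vni
        self.reachability = {}    # vni -> {mac: vtep_ip}
        self.vteps = set()        # every VTEP (virtual or physical) we manage

    def bind_gateway_port(self, gateway: str, port: str, vlan: int, vni: int):
        """'Put VLAN 5 from this port into VNI 5000': the provisioning step
        that maps physical infrastructure into a tenant network."""
        self.port_bindings[(gateway, port, vlan)] = vni

    def report_learned_mac(self, vni: int, mac: str, vtep_ip: str):
        """A gateway (or virtual switch) tells the controller about a MAC it
        has learned; the controller re-distributes it to every VTEP."""
        self.reachability.setdefault(vni, {})[mac] = vtep_ip
        for vtep in self.vteps:
            self.push_binding(vtep, vni, mac, vtep_ip)

    def push_binding(self, vtep: str, vni: int, mac: str, vtep_ip: str):
        # Stub: in a real system this would be an RPC, OVSDB, or agent call.
        print(f"program {vtep}: VNI {vni} MAC {mac} -> VTEP {vtep_ip}")
```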
Any questions on that? Yeah, sure. Well, I think that's an interesting question. Right now, what I've described is virtual switches that are VTEPs, and gateways that are physical switches with a non-virtualized environment behind them. What you're describing is logically possible, but it needs explicit support from the controller; it isn't something we've generally done at this point. Oh, I'm sorry, yeah: the question was whether, for compute node A, that VTEP could be in top-of-rack 1, so you'd have a more general mix of physical and virtual VTEPs. And I think that's right, it is logically possible, but it's something the controller has to explicitly support.

Yeah, that's exactly what happens: when you set this up, the first thing you do is configure the switch to say, by the way, there's this controller over there, and the controller is similarly configured with the gateway, and then everything else happens automatically based on the provisioning that comes through Neutron. Yes, right, I think in general VTEPs don't really care whether they're physical or virtual. There's a question of whether you try to optimize things around whether dynamic stuff can show up behind a host, but in general VXLAN VTEPs are a more general concept that doesn't care; the actions are all the same.

That's right. Does it need to be encapsulated, or transported on VLANs? I didn't quite follow the question, sorry. No, so that's a good question; I drew this but forgot to say it. If you look at where the layer 2 / layer 3 boundary is, you're routed from each of the compute nodes up, so there's no trunk port there. But for your gateway, the layer 2 / layer 3 boundary of the underlay we're talking about here is at that gateway, so you do have a trunk port down, potentially carrying multiple VLANs. There are no VLANs in the picture anywhere outside of the gateway, though.

So, I went through this part. The next use case I want to talk about is: how would I run OpenStack on top of a VXLAN environment with all hardware gateways? And I want to take this opportunity to make a quick plug for ML2. ML2 is a new plugin in Havana. It's not replacing them yet, but it's deprecating the monolithic OVS and Linux bridge plugins as the new version of what they were. This was an effort across many companies within the community to really improve the ability to have multiple different technologies and multiple different vendors coexist within the same plugin, and to provide APIs for them to do that.

The ML2 plugin, to me, comes down to a couple of things. One is the separation of the state of your tenant networks from how that state gets realized across your infrastructure. I have tenant networks; they have names, they have IDs, they have different segments. That's a concept that needs to exist regardless of how it's then implemented or realized. And how it gets realized depends on what you actually have: what vendor is in your physical network, what technology are you using, what virtual switches are you using? The goal is to have what are called mechanism drivers that can realize that state across the different physical or virtual pieces of your infrastructure, and that you can swap out if you need to without having to change everything. It's really trying to get away from the monolithic per-solution plugins and provide more flexibility and choice.
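To make the mechanism-driver idea a bit more concrete, here is a bare-bones, hypothetical skeleton of the shape these drivers take. The real base class lives in Neutron's ML2 code (neutron.plugins.ml2.driver_api in the Havana cycle) and exposes pre-commit and post-commit hooks for networks, subnets, and ports, each taking a single context object; the class below only mirrors that shape without importing Neutron, so treat the method signatures and helper names as illustrative rather than the actual API.

```python
class ToyMechanismDriver:
    """Illustrative skeleton of an ML2-style mechanism driver.

    ML2 keeps the tenant-network state (names, IDs, segments) itself; a
    mechanism driver's job is only to 'realize' that state on some piece
    of infrastructure (a virtual switch, a ToR switch, a controller)
    when networks and ports are created or deleted.
    """

    def initialize(self):
        # One-time setup: connect to the switch or controller you manage.
        pass

    def create_network_postcommit(self, network, segments):
        # Called after the network is committed to the Neutron DB.
        # 'segments' carries (network_type, segmentation_id) pairs,
        # e.g. ('vxlan', 5000) or ('vlan', 5), that this network maps to.
        for segment in segments:
            self.program_segment(network["id"], segment)

    def create_port_postcommit(self, port, host, segment):
        # A VM port came up on a given compute host: push whatever the
        # backend needs (trunk a VLAN to that host's ToR port, program a
        # VTEP entry, notify a controller, ...).
        self.program_port(port["mac_address"], host, segment)

    def delete_port_postcommit(self, port, host, segment):
        self.unprogram_port(port["mac_address"], host, segment)

    # Backend-specific plumbing would go below; stubbed out here.
    def program_segment(self, network_id, segment): ...
    def program_port(self, mac, host, segment): ...
    def unprogram_port(self, mac, host, segment): ...
```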
Just to plug this quickly: Bob Kukura and Kyle Mestery, who are core developers on Neutron, are giving a talk on ML2 tomorrow at 11 a.m. If you don't know about ML2, I would definitely go check it out, especially if you're using the Linux bridge or OVS plugin today; it should be a good talk. Sure. And I think these slides will all be posted later, by the way.

So what does this look like? If I wanted to do standalone Neutron with all hardware VTEPs, what would that mean? This would be for an environment where you really care about the performance loss on your compute nodes. Your layer 2 / layer 3 boundary now moves to the top-of-rack switches; you have VTEPs across the top-of-rack switches; you have OVS running, let's say, as your virtual switch; and the connection between OVS and your top-of-rack switch is just standard dot1q tags.

What I'd like to talk about first is a simple version of this. At the last OpenStack Summit we talked about some changes we had made to automatically provision VLANs across your physical infrastructure based on configuration that came in through Neutron, and we've since moved that into ML2. The way this looks is that you have Neutron running the ML2 plugin, you have the OVS mechanism driver effectively managing all the virtual switches, assigning VLANs and pushing them appropriately to the virtual switches, and then the Arista mechanism driver takes that information and pushes it to the physical infrastructure, so the physical infrastructure can build a map of tenant networks, the VMs that have ports on those tenant networks, the VLAN IDs they were assigned, and the compute node each VM was placed on. So you have this full view of the virtual networks along with the compute nodes, which are obviously physical resources. Then, on the physical side, we use LLDP to build a full map of your physical network topology, so we know which core switches are connected to which top-of-rack switches are connected to which compute nodes. As a result, we can match the two together: we now know, for a given top-of-rack switch, that a given port is connected to a compute node, that the compute node has the following VMs on it, that those VMs are part of the following tenant networks and have been given VLAN IDs 1, 2, 3, 4, and that therefore you have to trunk those VLANs on that port. So what we talked about last time was the simple layer 2 environment: how do I get my VLANs automatically trunked, as opposed to the old standard approach of trunking all VLANs everywhere and having massive bridging domains?
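A hypothetical sketch of that matching step: one input is the view built from Neutron and ML2 (which compute host needs which VLANs, based on where VMs with ports on each tenant network landed), the other is the LLDP-derived physical topology (which switch port each compute host is cabled to). Joining the two tells you exactly which VLANs to trunk on which top-of-rack port instead of trunking everything everywhere. The host names, port names, and VLAN numbers here are made up.

```python
# View from Neutron/ML2: compute host -> set of VLAN IDs its VMs need.
vm_placement = {
    "compute-a": {5, 12},
    "compute-b": {5},
    "compute-c": {37},
}

# View from LLDP: (ToR switch, port) -> compute host cabled to it.
physical_topology = {
    ("tor-1", "Ethernet10"): "compute-a",
    ("tor-1", "Ethernet11"): "compute-b",
    ("tor-4", "Ethernet3"):  "compute-c",
}

def required_trunks(vm_placement, physical_topology):
    """Join the two views: which VLANs must be trunked on which ToR port."""
    trunks = {}
    for (switch, port), host in physical_topology.items():
        vlans = vm_placement.get(host, set())
        if vlans:
            trunks[(switch, port)] = sorted(vlans)
    return trunks

print(required_trunks(vm_placement, physical_topology))
# {('tor-1', 'Ethernet10'): [5, 12], ('tor-1', 'Ethernet11'): [5],
#  ('tor-4', 'Ethernet3'): [37]}
```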
So now, a simple way to think about how you would do all-hardware VTEPs is to take that one step further and say: okay, I've got a solution that trunks all of my VLANs to my top-of-rack switches; all I need to do now is map those VLANs into VNIs at each top-of-rack switch in a consistent way. Imagine just saying that every VLAN, say VLAN 5 coming from the green VM, maps to VNI 5000, or 5005. The traffic then goes over the layer 3 fabric to the VTEP where the physical infrastructure is located and gets sent down. So you have this connectivity where everyone thinks they're VLAN-connected, while you're able to take advantage of a layer 3 fabric and the benefits it gives you in MAC address scaling, general scalability, and fault tolerance. Now, in this solution as described, we use head-end replication at each top-of-rack switch, as opposed to a replication node: when a packet comes up and you don't know where it's supposed to go, you send it to every other VTEP, and then you learn over the tunnel and build your VXLAN MAC tables that way.

Yep, so the question is: what happens when you go over 4,000 VLANs? What I just described is the simple version, where you say: look, I only need 4,000 tenant networks, so I'm not limited by VLANs, but I still want to use a layer 3 fabric. In that world, VLANs are allocated consistently across all of your racks. And that's actually the next point to get to: okay, that's nice, but I'm a service provider, I want to go over 4,000 tenant networks, how do I do this? This isn't something we support today, in all honesty, but it's something ML2 enables with some of its multi-segment support, where you can do rack-specific VLAN allocations: your VLAN is only locally significant within that rack. I think there's some work required here within ML2, and it's something being pushed by various folks. You can have 4,000 tenant networks within a rack, and a VM in a tenant network on one rack may have a different VLAN than the same tenant network on another rack. That's how you can get past the 4,000-tenant-network limit, and it's something ML2 enables with some of the infrastructure it provides, in a more general way than what's been there before.
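To illustrate how rack-local VLANs get you past the 4,000-network limit, here is a toy allocator, with made-up names and numbers, showing the idea: each tenant network gets one global VNI, but the VLAN that carries it between a compute node and its top-of-rack switch is drawn from that rack's own 1-4094 pool, so the same tenant network can ride a different VLAN in different racks.

```python
class RackLocalVlanAllocator:
    """Toy allocator: global VNIs, per-rack (locally significant) VLANs."""

    def __init__(self):
        self.next_vni = 5000
        self.network_vni = {}          # tenant network -> global VNI
        self.rack_vlans = {}           # rack -> {tenant network: VLAN}

    def create_network(self, network_id: str) -> int:
        """Every tenant network gets exactly one VNI, cloud-wide."""
        vni = self.network_vni.setdefault(network_id, self.next_vni)
        if vni == self.next_vni:
            self.next_vni += 1
        return vni

    def vlan_for(self, rack: str, network_id: str) -> int:
        """Allocate (or reuse) a VLAN for this network inside one rack.

        The pool is 1-4094 per rack, so each rack can carry up to ~4,000
        tenant networks even though the cloud as a whole can have far
        more, keyed by the 24-bit VNI."""
        vlans = self.rack_vlans.setdefault(rack, {})
        if network_id not in vlans:
            used = set(vlans.values())
            vlans[network_id] = next(v for v in range(1, 4095) if v not in used)
        return vlans[network_id]

alloc = RackLocalVlanAllocator()
alloc.create_network("tenant-blue")
alloc.vlan_for("rack-2", "tenant-blue")            # rack-2 hands VLAN 1 to blue
vni = alloc.create_network("tenant-green")
print(vni,
      alloc.vlan_for("rack-1", "tenant-green"),    # -> 1 (rack-1's pool is empty)
      alloc.vlan_for("rack-2", "tenant-green"))    # -> 2 (blue already took 1 here)
# Same VNI (5001) everywhere; the VLAN carrying it differs per rack.
```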
So yes, if you want to take advantage of the hardware capabilities of VXLAN, and you don't want to pay the penalty of doing encapsulation and decapsulation at the virtual switch, this is a direction you can take: hardware VXLAN everywhere. Again, back to the questions around trade-offs: what kind of network are you trying to build, how many tenant networks do you have, how important is the flexibility of software tables versus what hardware can provide? That's where you make the trade-off between the various solutions. Any questions on this? Yeah. Yeah, so that's something the physical infrastructure manages; it's similar to how we build the network topology map: we know where all the different physical switches are and the properties of those switches. Sorry, yeah, so each switch is manually configured with the fact that it is a VXLAN endpoint, and a combination of ML2, our Arista plugin, and the physical infrastructure distributes that provisioning.

So now the question is: okay, let's go one further. How can I do standalone Neutron, with software VTEPs, and hardware VTEPs for the gateways? Basically the same picture as the first one, but with no SDN controller: I want to use only the OVS plugin. There's not much to see on this slide, because the truth is this isn't really something you can achieve today. This is the other model that fits the requirements for running OpenStack on top of a VXLAN environment, but as we talked about earlier, OVS on its own doesn't really scale well without an SDN controller. Again, great work has been done, by the gentleman up here, with the L2 population mechanism driver to do some of the things an SDN controller would do, pre-populating some of the MAC addresses. But fundamentally you need a way to take the information learned by the virtual switches and the information learned by the physical infrastructure and coordinate that across your environment, and that's a hook that Neutron, or ML2, is going to have to provide. So that's an interesting area where things could potentially go. It's also an interesting question whether you need this model at all: between having all hardware VTEPs and having SDN controllers, what is the need it fills? But yes, this is the third model, and it's where things break down today if you really want to do this at scale in production.

The other thing, and this is true generally: I think one of the things that's a little bit missing is a general model for VXLAN gateways within Neutron. The NSX solution has an extension that does this; I think it just needs to be generalized, so you can take these actions to dynamically add and map physical infrastructure into your virtual environments. Some of you may be saying, well, this seems similar to Ironic and some other efforts going on within the community, and I think there is some overlap there, so this is an area where things will probably evolve over time.

So I'm going to leave it there. I think we have a few minutes for questions, and again, I really appreciate everyone coming and standing in the back. Any questions, for the folks up here or for me? Yes.

Yeah, so the question is: what about an open-source SDN controller? I think the same model as in the first diagram applies there. The thing that's missing right now, well, OpenDaylight is not really deployable right now, which is part of why I wasn't focusing on it, but I think that is a direction things could go. It would need to have similar capabilities to what Neutron has today, or what some of the other SDN controllers have, which is the ability to map in gateways and have both virtual and physical VTEPs. But there's nothing fundamental against it; I don't have anything against that.
It's just a question of what you can do today. And similarly, there's the Ryu controller and other controllers that do this kind of VXLAN orchestration in a more open-source way; part of what's missing there is the hardware gateway piece.

Where does VXLAN stand with respect to IP version 6, both for the virtual machines and for the hosts themselves? Is it transparent? Yeah, so the VXLAN spec itself doesn't say anything restricting it to IPv4 or IPv6; I think the encapsulation could be carried over either of them. Someone else will have to answer whether OVS supports it. We at Arista do not support it in our hardware gateways yet, but as a standard, I think there's nothing preventing it. And is that true also for the hosts themselves, the hypervisors, communicating between hosts? Again, I think there's nothing preventing it. I won't say it's doable today, because I just don't personally know, but I think it's something that's probably easily addable, if that makes sense. Thank you. Obviously there may be some scale issues; there's certainly more state to carry with a larger address, and all that sort of stuff. Other questions? Yes. Wait for the mic. Yeah, sorry.

With regard to the future work on Neutron directly controlling the VTEPs: it seems that Neutron would take over some of the functionality of the SDN controller, right? Yeah, and I think that's part of the open question: should Neutron do this? It's not doable today, and I don't think it's an insignificant amount of work to get there. Between the other two solutions you have some good options, and, as was mentioned earlier, if open-source SDN controllers evolve to the point where they can do this, it becomes less important for Neutron to do it itself. So I certainly agree with that.

I think we're out of time, so I'm going to have to cut this off. Thank you, everyone, for coming, and feel free to come up and ask questions.