So, hello. My name is Nolan. I'm the co-founder and CTO of Cumulus Networks. Cumulus Linux is a Debian-based Linux distribution for white box, or bare metal, switches. You can get these from a variety of ODMs, or this box right here is actually from Dell. I have a microphone as well.

Okay. My name is Chet Burgess. I'm vice president of engineering at Metacloud. Metacloud is an OpenStack solutions provider. What we do is we help companies design an OpenStack-based cloud. We build it for them, and then we do 24x7 maintenance, administration, monitoring, that sort of stuff. The photo you see up here: Nolan and I go way back. We went to high school together. So this is us twenty-some-odd years ago when we were a whole lot younger. Well, at least I had a lot more hair. Nolan still seems to have the same amount. So we're going to go ahead and get started here. We're hopefully going to do a demo towards the end. It's acting up right now, so I'm going to be in the corner working on that. But Nolan is going to go ahead and get us started.

Okay. So as many in the audience are aware, today most OpenStack deployments that don't have some sort of SDN controller are largely based on L2 networks, with multi-tenant policies provided by VLANs. Now, there are various problems with that kind of traditional design. Here we have a slide graphically representing it. As you can see, there's the big pair of core switches up at the top, and then there are these pairs of aggregation switches down below. There's a lot of complexity that comes out of this. Those pairs have to fail over. There has to be some way for them to communicate such that one can take over for the other. And even inside them, because they're so important, there are two supervisor cards that have to communicate and be able to fail over. So now we've got two proprietary protocols that are sharing state and can have things go wrong. They can get out of sync. There are plenty of different failure modes.

The other problem is that up at the top of that core, you don't have as much bandwidth as you have down in the lower tiers. Two servers that are in the same rack can communicate very quickly. But if you have to go across through the core to the other side of the data center to a different rack, suddenly there's a lot less bandwidth available, because you're bottlenecking through that core. You can work around this to an extent by trying to be extremely clever about where you schedule jobs in the cluster, but sometimes you don't have that option. Maybe you want to run a job across the entire cluster.

Then there are the other traditional problems. VLAN limitations: if you want to have more than 4,096 tenants, you're kind of out of luck. And there are more mundane things: MAC tables have limited size, broadcast domains get large, you get a lot of broadcast traffic. There are a lot of problems like that. And then finally, you kind of have a choice. You can either waste most of your capacity if you build a mesh network, because STP, the Spanning Tree Protocol, is going to block all but one of the ports — so you paid for 20 ports and only one of them is actually passing traffic. Or, alternately, you can use proprietary vendor extensions like MLAG, vPC, et cetera, to provide that kind of active-active capability. But now you're back to using these proprietary protocols, and that might give said vendor a little bit of leverage over you.
So what do we do instead? L3 works, right? We've all used the Internet. It works at incredibly high scale. The protocols may appear complicated, but a lot of these Layer 2 alphabet-soup protocols — STP, TRILL, BPDU guard, root guard, there's a whole bunch of these — actually end up being more complicated than something simple like OSPF. And you have ECMP. So that bottleneck of having 20 links, most of which are disabled — now you can use all of them, all the time. And you don't need these complicated proprietary protocols for failover, because the Internet was designed for failures, right? The old line is that it was designed for Baltimore to take a direct hit and the Internet would continue to work.

So here I have a graphical representation of what we call a leaf-spine architecture. Some people call it a fat tree; there are a bunch of different names for it. But the interesting property it has is that each one of those servers can talk to all the other servers in the other racks at full speed, as if they were in the same rack. So now there's no difference in locality, except maybe a tiny increase in latency. But from a bandwidth perspective, you have full bandwidth.

Okay. So we just talked about pure L3 and why we think it's better for really large capacity clouds, especially if you need to maximize things like east-west bandwidth. So think big data here, think Hadoop, think those sorts of workloads. So pure L3, that's great. But one of the things we like about our OpenStack deployments is that we can create individual projects, and those projects can have security groups to provide Layer 3 separation and security, and we can use something like VLANs to provide a certain amount of Layer 2 separation. So if we're using a pure L3 deployment, how do we do that? Obviously you can't do something like trunk VLANs all the way across your L3 ECMP fabric. It just doesn't work.

So there's a solution for that: VXLAN, Virtual Extensible LAN. This is an IETF draft standard. I put the link up there. If you're really interested in VXLAN, I'd highly recommend you read this draft. It's one of the better network standard proposals I've seen. It clearly states the problem, clearly states the solution, clearly states how it works. It's very well written, very digestible. So what is VXLAN? Well, it sounds a lot like VLAN — all we did is throw an X in the middle, so it's probably something kind of similar, maybe? It's a network overlay. There are a bunch of these out there, and it's really simple. It sounds complicated, sounds kind of scary, but it's actually very simple. It's just an encapsulation of Layer 2 frames inside a Layer 3 UDP packet.

So what does that mean? I just said a bunch of stuff that might be a bit confusing. So we have a diagram here, and the left side and the right side should look familiar, because these are two very standard things that we deal with every day. Up there in blue, we have what just amounts to a standard UDP/IP packet. There's nothing special about this; it's like any other UDP packet that you're going to see on your network. And in green on the far end, we have basically what just looks like a standard Ethernet frame.
You have a destination MAC, a source MAC, an optional 802.1Q tag if you want, and then whatever the rest of the payload is. And then stuck right in the middle, in pink — yeah, it's coming out as pink, okay — you have the VXLAN header, which is just a little bit of extra data. And that's really all VXLAN is. It's just that little bit of extra data, and then some smarts on either end to do something with it.

So what's in that data, or what does that data mean? As we talk a little more about VXLAN over the next few slides, there are two concepts in VXLAN that are pretty important to understand. Number one is what we call the VNI, the VXLAN network identifier. It's a 24-bit number, so you have 16-ish million of these — whatever that math works out to be, it's a little more than 16 million. It's a field in the VXLAN header. It's very similar to a VLAN ID, except you have a heck of a lot more of them. And it's basically what's used to limit your broadcast domain, or put another way, limit your virtualized Layer 2 scope. The other important thing to talk about is the VTEP, the VXLAN tunnel endpoint. We said this protocol is an encapsulation protocol, so if we're encapsulating something, we have to send it between two points. Those are the tunnel endpoints, the VTEPs. They're the originator and terminator of the VXLAN tunnel, and you have a pair of VTEPs for every VNI — and you might have more than two VTEPs for a VNI if you have multiple systems. And this is where, on the earlier slide I showed you, the outer destination IP and outer source IP come into play. Those are your VTEPs.

So how do we send a packet using VXLAN? Well, it looks a lot like sending a packet using VLANs or just a straight Ethernet interface. We check the ARP table and try to find a match. We know what IP we're sending to; we're looking for its MAC and interface mapping so we know where we need to send this packet — where in our Linux kernel does this packet get delivered so something can be done with it? Then we look at the Layer 2 forwarding database. In this case, we're looking for the IP address of the destination VTEP. So we know the MAC — the ARP table told us the MAC, and it even told us what interface to look at. Now we look in the Layer 2 forwarding table and say, for this MAC address, do you have a destination? And hopefully we get back an IP address that gives us a VTEP destination.

Up here I've shown what this looks like. We've got a VXLAN interface on VNI 12345 — raise your hand if that's the combination to your luggage. Anyone? Great. So we've already got this configured, and we have an IP address of 10.9.8.1. Let's say I want to talk to 10.9.8.2. I look at my ARP table and it gives me a MAC address and says, hey, that's on vxlan12345. Great. Now show me the Layer 2 forwarding database for vxlan12345. Cool. I have that same MAC address, very first entry, with a destination of 172.16.3.102, which is the remote VTEP I want to send this to.

So how do we send the packet, then? Again, it's pretty simple. The packet's delivered to the VXLAN interface, where it's encapsulated for the destination VTEP that's configured for that VNI. So we take that inner Ethernet frame, we put a VXLAN header on it — which is basically a VNI and a few flags — and then we wrap it in that outer UDP header.
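Before it goes on the wire, here is roughly what those two lookups look like from a shell on the sending host — a minimal sketch using the interface name and addresses from the example above, with a made-up MAC address:

```
# Step 1: the ARP/neighbor table maps the destination IP to a MAC and to
# the VXLAN interface it lives behind.
ip neigh show
# 10.9.8.2 dev vxlan12345 lladdr 52:54:00:12:34:56 REACHABLE

# Step 2: the Layer 2 forwarding database on that interface maps the MAC
# to the IP address of the remote VTEP to encapsulate toward.
bridge fdb show dev vxlan12345
# 52:54:00:12:34:56 dst 172.16.3.102 self
```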
And finally, we send it on the wire. The destination receives it. It decapsulates it, because hopefully it knows what it's doing, since it said it did. And then the inner packet is processed by the receiver, whether that's a switch that's going to switch it someplace, or a Linux box that's going to deliver it, maybe to a bridge, to go to a VM or something like that.

Great. So how do VTEPs handle BUM traffic? Okay, so I will pass it off. BUM packets are basically broadcast, unknown unicast, or multicast packets. Okay, so to give Chet some time to resolve some minor technical difficulties, I'm going to describe this even though this is mostly his work. BUM packets are kind of the fly in the ointment of VXLAN. It's all great to have these unicast packets going from the source VTEP to the destination VTEP, but there are a couple of exceptions. Broadcast packets, for example, have to go to everyone on that L2 segment. Unknown unicast: if you don't know where the destination MAC is, you have to flood it to all the possible places, because that guarantees it gets there. And finally, multicast.

So there are basically two common solutions to this problem. One — and the one that's talked about in the VXLAN standard — is to use multicast. For each virtual network, for each VNI, you allocate a multicast address, and everyone who's interested in that VNI can register to get that traffic. Now, when I have one of these BUM packets, I just send it to that multicast address and the network takes care of replicating it to all of those different endpoints. The problem with that is that most people, especially in IP routed networks, turn off multicast, because there's a wide variety of problems that people have run into in the past. The other option, and the one that we tend to see a lot more of, is having some sort of service node. This is a central or distributed program that runs, and every time I have one of these BUM packets, as a VTEP, I send it to this service node, unicast. So now we don't need multicast. The service node has a mapping of, for each VNI, who is involved — all the VTEPs that are involved. So it can then replicate that packet, unicast, to each one of those endpoints. And there are some optimizations you can make around having multiple of these service nodes, and there are some cool tricks you can pull around using anycast addresses in your routing protocol so the same IP address is available anywhere.

So yeah, and then you... yeah, you want me to take it back over? Yeah, take it over. All right. This is a lesson that when you deploy real clouds, you should have multiple redundant controllers with HA. I seem to be having a few problems, but it's back now, so hopefully that demo will come off at the end. So let's see, continue where we were. What were we talking about? Oh, so VXLAN, it turns out, is actually fairly well supported in modern Linux distributions. VXLAN support first went into kernel 3.6, kernel 3.7, somewhere around there. I highly recommend you use 3.10 or later. We've seen some stability and performance problems with earlier kernels, but as always, your mileage may vary, depending on who builds your kernels. One interesting thing to note about the Linux kernel support: it went into the kernel so early that IANA had not yet officially assigned a UDP destination port for VXLAN packets. Like we talked about, you're basically sending a packet to a VTEP, and it needs a well-known port.
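To make the two BUM-handling approaches concrete, here is a minimal sketch of how each one looks when creating a Linux VTEP with iproute2 — the VNIs, device name, and addresses are purely illustrative:

```
# Multicast-style BUM handling, as described in the VXLAN draft: every VNI
# gets its own multicast group, and BUM frames are sent to that group.
ip link add vxlan100 type vxlan id 100 group 239.1.1.100 dev eth0

# Service-node style: BUM frames are sent unicast to a single remote
# address, which can be a service node that re-replicates them.
ip link add vxlan101 type vxlan id 101 remote 192.0.2.10 dev eth0
```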
So, on the UDP port: the IANA assignment is 4789. The Linux kernel module, by default, will register on 8472. Okay, that's not necessarily a problem if you've just got Linux box to Linux box, but a couple of different switch vendors have implemented VXLAN support now as well, and they mostly use 4789, so be aware. There's a module argument you can pass to change that, but your default will be 8472. And then you need a fairly newish version of the iproute2 command, 3.7 or later, to be able to reliably configure your VTEPs.

So here we have an example of how you configure it. Pretty simple. It's a lot like adding a bridge or a VLAN interface: ip link add, the name of your interface, vxlan12345. You specify a type, so our type is going to be vxlan. You specify an id for it — that's your VNI. And then remote and an IP address. This is how you configure a VXLAN VTEP for a unicast endpoint. If you want to do multicast, there's a different option for that where you specify the multicast IP. You would think, since we know what a multicast IP is and we know what a unicast IP is, we could just have one argument for it, but no, apparently the writers of iproute2 decided we needed two entirely different arguments. Then you basically set the device up, and if you do an ip -d link show dev blah, blah, blah, you can actually see all the information there. You can see what your VXLAN ID is, you can see what your configured remote is, and you can even see the port range — that's the ephemeral UDP port range that's going to be used locally by that interface when it sends out UDP packets. (A condensed sketch of these commands appears below.)

Great. So how do we use this with OpenStack? I mean, it's cool, but we want to use it. Well, we know everyone in here is expecting to talk about Neutron and how to use this with it. The truth is it mostly just works with Neutron. But it turns out we had a bunch of clients that were using Nova-network already in existing cloud deployments — we've been deploying clients for about three years now, so we started really before there was a Neutron — and a couple of them are now starting to do big data. They've got Hadoop workloads they wanted, and the limitations of the L2 stuff were starting to cause problems for them. So we spent some effort to make this Layer 3 plus VXLAN topology work for them using their existing deployment scenario.

So what we've done is we've added full VXLAN support to Nova-network. It turns out that's not nearly as hard as you might think, because it's mostly just a couple of ip commands. MetaCloud also created a unicast VXLAN service for doing all the BUM flooding for you, so this means you won't need to use multicast. That VXLAN service node I mentioned — it's a unicast service node for doing that BUM flooding. Gosh, I hate that term; it's kind of weird. So it eliminates the need for multicast. It's a pure Python-based solution today, and it has two components. There's the VXLAN service node daemon, which actually does the replication and flooding, and there's a VXLAN registration daemon that you run on all your VTEP endpoints to keep the service node aware that the endpoints are there. And we're planning to open source this in the very near future.

So what does vxsnd, the actual service node daemon, do? First off, it listens for the VXLAN BUM packets from VTEPs.
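Here is a condensed sketch of that configuration sequence, using the example VNI and remote from above; the dstport option is newer, so check it against your kernel and iproute2 versions:

```
# Create the VTEP for VNI 12345, pointing BUM traffic at a unicast remote.
ip link add vxlan12345 type vxlan id 12345 remote 172.16.3.102 dev eth0
ip link set vxlan12345 up

# Newer iproute2 releases also accept a dstport option if you need the
# IANA-assigned 4789 instead of the Linux default of 8472:
# ip link add vxlan12345 type vxlan id 12345 remote 172.16.3.102 dstport 4789

# Inspect the VNI, the configured remote, and the local ephemeral UDP
# port range the interface will use.
ip -d link show dev vxlan12345
```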
So the service node, vxsnd, is the thing that'll actually listen on port 4789 or 8472, depending on which you're using, and receive those packets. It passively learns from those packets as well: based on a packet, it now knows there's a VTEP at a certain IP, because that was the source IP, and it knows the VNI, because that's in the VXLAN header. So that's one of the ways it learns where all the VTEPs are. It then takes that BUM packet and relays it to every other VTEP for the given VNI that it knows about in its registry. The service node also supports, on a separate port using a slightly different mechanism, being able to register that there is a VTEP for a VNI at a certain address, and it also supports — or is starting to support, it's still very early — replicating from other service nodes. So you can have multiple service nodes that are basically replicating to each other. You could easily imagine a scenario where you have two or three racks of gear, you run a service node on the top-of-rack switch of every rack, say, and you have those replicate to each other, and now they learn about your whole network.

The registration daemon is really simple. This is something you run on, say, your hypervisors, your switches — wherever you have VTEP endpoints. Again, it's just a simple Python daemon. It looks at all the VXLAN interfaces you have configured, looks at who their remote is — i.e., who you would be sending your BUM packets to — and it sends registration requests there, assuming there's a service node also running there. So Nolan, back over to you?

So this is all pretty cool, right? Now we can have all these VMs talking across an ECMP network, using tons of bandwidth, without all those locality problems. But we still have one problem: we're going through a software gateway to get to the outside world. All the traffic to and from the Internet, or to physical devices — switches, load balancers, routers, firewalls, et cetera — has to go through this Linux box running fairly standard Linux tools, configured by Nova-network or Neutron's L3 agent. What it's really doing is configuring these VXLAN interfaces that Chet was talking about, adding bridges to bridge them to front-panel ports, and configuring NAT for floating IP addresses using iptables. This is all very standard Linux networking functionality. (A rough sketch of that plumbing appears below.)

So it would be really cool if you had a switch that was actually just running Linux and would transparently accelerate all of these features using a hardware forwarding ASIC. And conveniently, we have one of those right here. This is a Dell S6000, which is a 32-port 40-gig switch. You can break those 40-gig ports out into four 10-gig ports each, so you can kind of choose what mix of 40 and 10 gig you want. And it's running Cumulus Linux. As I mentioned earlier, that's a Debian-based distribution for switches like this. Basically, you can think of it as a hardware accelerator. You configure the routing table, you add some routes to it, and behind the scenes it'll go down and program the hardware to do that in hardware. So the vast majority of the traffic through this thing is being forwarded very quickly in hardware. The operational model ends up looking very much like a Linux server that happens to have 32 40-gig NICs. Now, that might be a little difficult to pull off in practice, but that's what it looks like.
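Going back to that software gateway for a moment, here is a rough sketch of the kind of standard Linux plumbing it ends up with — a VXLAN interface bridged to a front-panel port, plus a floating-IP NAT pair. The names and addresses are made up, and Nova-network installs its real rules in its own iptables chains; this only shows the shape of it:

```
# Bridge the tenant's VXLAN interface to a physical (front-panel) port.
ip link add br-tenant type bridge
ip link set vxlan12345 master br-tenant
ip link set eth1 master br-tenant
ip link set br-tenant up

# 1:1 NAT for a floating IP (203.0.113.10) mapped to a fixed IP (10.0.0.5):
# DNAT for inbound traffic, SNAT for outbound traffic.
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.0.0.5
iptables -t nat -A POSTROUTING -s 10.0.0.5     -j SNAT --to-source 203.0.113.10
```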
The difference from that hypothetical server is that this draws, I don't know, a couple of hundred watts, and it's about a hundred times faster. So I'd like to call out a couple of points here, but first we need to turn the switch on, because it takes about a minute to boot, and I apologize in advance — this is going to be very loud. We don't have the fan control loop for this brand-new hardware platform yet. Hey Nolan, where's the power cable for the switch? It was here a minute ago. It can't have gotten far. Boy, now this is embarrassing. Anyone got a standard server power cord in the audience? Oh, hey, there it is. Okay. So I apologize, this is going to be loud. There's a slight bug with the fan control. So that's the sound of the enterprise right there. Anyone want to hear it up close and personal?

So we have two little mini computers here running Linux. One of them is going to be that service node we talked about — you could run it on a top-of-rack or spine switch, but for simplicity we just ran it on an extra server. And the other one is going to be the hypervisor, running nova-compute as well as the database and all the various OpenStack services, except for Nova-network, which is actually running on the switch. So when it configures things on the switch, it's actually programming the hardware here. Now, this is a 40-gig switch, and these little machines only have one-gig NICs. So this right here is a QSFP+, the 40-gig standard, to SFP+ converter — SFP+ is for 10 gig. And this right here is an SFP, which SFP+ is backwards compatible with, to 1000BASE-T adapter. The thing to note is that it will actually negotiate all the way down to 10 megabits. So when I plug these together and plug it in there, I can make the world's most expensive 10-megabit Ethernet port. Can you see if it's up? Yeah. Sorry, normally we would have prepared all this ahead of time, but as you saw, we had a few issues. Okay, there — the ports are up now on the switch.

So Chet is now going to bring up DevStack, which we're using just for simplicity, here on the switch, to bring up the Nova-network server. Yeah, so what you're looking at here is DevStack starting Nova-network on that Cumulus Linux switch over there. And we have a network that's pre-configured. We know we're going to be pretty short on time, so we have a pre-configured project with a pre-configured network using a VXLAN endpoint that it's bringing up right now. Okay, it's up — let's join the service group. And we're going to go ahead and bring up the compute node now. And hey, compute starts super fast.

Great, so this is our Horizon — well, actually, this is MetaCloud's UI. It's based on top of Horizon, but we've done a lot of work in the past couple of months to basically put our own skin on it and everything. During setup we were going to have all the pages pre-rendered and cached, but as I mentioned, the NUCs are a bit slow. It doesn't do this when you're not on DevStack, I promise. And once again, I'm embarrassed on stage. There we go — I was about to worry. All right, so we have a demo project we've created here, and we'll go ahead and launch an instance. We've got a little image that already has a web server pre-installed and a little site, so we can see this pretty simply. So there we go, it's building. I think everyone in the room is pretty familiar with how to build a VM, so we'll give it about a minute and it should be up and running. Okay, there we go. So now, like I said, we put a web server on here.
We pre-installed a simple web server, and we went ahead and set our default security group to basically allow all ports so we can demo this for you. But you can imagine you can specify whatever you want, just like you can with a VLAN or a flat type of network. So now what we can do is go ahead and add a floating IP to this guy. We already have a floating IP pre-created here — apologies if this happens to be your IP at home, we are using SBC's DSL range. So there we go. At this point, the floating IP address has actually been brought up on the switch. What the switch is going to be doing, if it works in a second, is hardware-accelerated NAT into a hardware-accelerated VXLAN VTEP, passed down to the Linux box that has its own VTEP configured there. So if we're lucky, this will work. I love marquee. I am not remotely surprised that worked. I'm not either. So there you go — that's how we do it.

Oh, we've got about 10 minutes left here, so we'll go ahead and move on with the presentation. So unlike — oh, hey, uh-oh. Yeah, PowerPoint crashed on us. At least it wasn't OpenStack. All right. Next steps. As we already said, it turns out, as we got started trying to do the VXLAN and Layer 3 stuff, we discovered Neutron mostly just works for this sort of thing already. But because we had clients with existing Nova-network deployments, we mostly focused on Nova-network. So what's next? Like I said, we have the full VXLAN code. It's obviously working — it's shipping in the MetaCloud product today. And we're in the process of writing up a blueprint that we really hope will get accepted, to bring this into Juno.

We've also created that VXLAN service daemon, and there are a few things we want to improve there. We want to make the registration daemon a lot more dynamic. We want to actually monitor the kernel's netlink messages — which, it turns out, Cumulus Linux switches, as well as Linux in general, support very well — and then we can see VNIs coming and going and send those registration messages right away. That'll be a lot more dynamic and a lot quicker. (There's a tiny illustration of that idea below.) We want to improve the concurrency and scalability a little bit; we have some ideas around being able to do things like fair-share scheduling, so that one particular VNI can't starve the others. Support for tiered replication: we have the very basics of replication, but you can imagine, if you get a large enough Layer 3 deployment, you still need a traditional kind of leaf-spine — it's just a lot more meshed together. It would be neat if those top-of-rack switches could replicate to some kind of spine or core switch that then replicates across your core. And we also want to get this open sourced in the very near future. We have a goal of getting that done before Paris, but hopefully it'll be done a lot sooner than Paris.

Okay. And then on the running-Nova-network-on-the-switch side, I'm going to admit some stuff here. The ASIC in this thing can terminate the VXLAN tunnels, but what it can't do is route while terminating a VXLAN tunnel. The only thing it can do coming out of the VXLAN tunnel is bridge to a front-panel port, or a VLAN on a front-panel port. But we at Cumulus Networks are very clever and resourceful, so we have a 40-gig cable here — you probably can't see it in the back — that goes from one of the 40-gig ports right into the other 40-gig port below it.
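As a small illustration of that netlink idea mentioned above: a daemon would consume the same events programmatically, but you can watch them from a shell.

```
# Print link add/remove events as they arrive over netlink; a registration
# daemon watching these would notice VXLAN interfaces (VNIs) coming and
# going and could register them with the service node immediately.
ip monitor link
```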
So with that loopback cable, what's actually going on is that the VXLAN packet is coming in from the hypervisor, being decapsulated, bridged out this port, and then coming right back in on this port, where it is NATed. And then it goes out to the Internet, represented by Chet's laptop. Yeah, I've suggested to the Cumulus marketing folks that they need to call that cable the NAT accelerator. Yeah, it's very affordable — I assure you, no markup on that. So you could imagine specialist deployments where this might actually make sense, but for the most part, for most widespread deployments, you might want to wait for next-generation ASICs that will be able to do both routing and VXLAN functionality on the same port at the same time.

There's another hack here. Cumulus Linux doesn't currently support NAT, so I'm going to have to be careful how I choose my words, or our VP of engineering is going to kill me for promising features he hasn't committed to yet. But for this demo, I hacked in just enough NAT support for the exact kind of iptables rules that Nova-network adds for the floating IP. So it's not a general solution. We will eventually add this NAT support properly; I'm not going to say when. And then the last caveat is that there's a limitation of this ASIC: it can only have 1,024 NAT entries, and you need one for each direction. So you end up with only 512 floating IPs supported in hardware. Now, that is a /23, and it is per switch. So it's not great, but I hear IPv4 is in pretty short supply these days, so maybe that's enough for some deployments. And of course, if you're doing something with an internal cloud — if you're doing big data, that sort of stuff — you don't typically end up needing floating IPs, or you need very few. So from the perspective of Nova-network running on a switch, doing the VXLAN acceleration with Layer 3 today, that all works and it is well supported.

Do you want me to turn this off? Yeah, you want to turn off the jet engine over there? That's probably good. Okay, so if we've done things right — and we have — we've left five minutes for Q&A here. One thing I'll mention before I turn it over for questions is that MetaCloud is currently looking for a number of DevOps as well as software engineers, so if anyone is interested, you can come find me after the talk and I'm happy to talk to you. So on that note, we'll turn it over to Q&A, and I think we already have a line, so let's get started.

Can you tell us the IPv6 story with this? On the switch side, IPv6 is fully supported. Oh, it's... okay, on the underlay, it's v4 only. But those are hidden addresses, so that's less important. The overlay supports any L2 protocol — you could run IPX through this if you wanted. I don't know why you would do that, but, you know, why not? Well, and actually, the underlay will leverage the existing IPv6 support in whatever software you're using for... the hardware doesn't support... oh, my bad. Oops.

Howdy. So we've all heard a lot about limitations in Neutron, and, you know, if you really want bandwidth and whatnot, you go to Nova-network. Where do you see Cumulus Linux and MetaCloud in the coming releases — actually eliminating that, and actually running Neutron on the switch, and kind of having these accelerators everywhere? So I'll go first, because I have a really easy answer: we don't care at all. Whether you run Nova-network on the switch or you run an L3 agent, it's all the same to us. It does roughly the same things.
So we'd have to add network namespace support to get Neutron working on here, but that's the only thing I'm aware of that's required. From the MetaCloud side, our current product is based on Nova-network. We're working on — well, this is the start of — a next-generation version of that, which is going to be VXLAN based, L3 based, and that sort of stuff. And then after that, our plan is to start working on a Neutron one. I don't have a date for when we would have a Neutron deployment, but it is on our roadmap to be what we do from a networking perspective after we finish cleaning up all the stuff we've outlined here that we're currently working on for our existing clients. And just a quick note: everyone who asks a question, please come up over here afterwards and collect a t-shirt. We have both MetaCloud and Cumulus Networks t-shirts. Yeah, I think we have some stickers too, so whatever you guys want. Oh yeah, stickers as well.

So I have a question about the selection of the central service node for the VTEPs. Why do you use a single service node instead of multicast, and what happens if you have many VTEP endpoints — how are you going to scale? So, yeah, you want to take the multicast part? As a practical matter, I've seen very few customer sites that actually use multicast, especially in a routed network. You occasionally see it in L2 networks, but the multicast routing protocols are not nearly as robust as the unicast ones. And just as a practical matter, if you want to run in real customer networks, you have to have a way to work without multicast.

From the scaling perspective — that was kind of a two-part question — there are a couple of different ways you can do that. One, that's one of the reasons we're working on that notion of the replication code, so that you can have multiple service nodes running that are aware of the full VTEP map but are only directly servicing a limited number. Also, you have ways with the VXLAN configuration where you can actually configure — I think this is actually released in the iproute2 and VXLAN code in the kernel, but it is possible to have multiple addresses you send to. There's no limitation on that. There's either a patch pending for the Linux kernel or it's already there, I can't remember. So you can also do things like send to different ones. The idea would be that you take the number of servers in one rack and replicate those to a pair of nodes that run on, say, a pair of top-of-rack switches. And then those top-of-rack switches would be able to replicate to a spine layer, if you will, and then those would know how to replicate to other nodes. So the idea is to try and tier it, basically.

I'm just going to say, it might be interesting to know that, combined between Cumulus and MetaCloud, we're looking at releasing this into two production Hadoop environments over the next couple of months. That was Sean Lynch, the CEO of MetaCloud, who I should mention — as I said earlier, we've been working on this for some existing production clients, so we have two clients who are looking to deploy their Hadoop workloads on this.

Have you guys tried to do VXLAN encapsulated with IPsec also, and is there anything in Neutron where that kind of tunneling happens? So the question is about IPsec and VXLAN encapsulation? Right, with, yeah.
Not translation, let's say, on top of VXLAN — I want to do IPsec, where it's pure L3 starting right from the compute node. Okay, so there's no limitation with VXLAN the way it works. There's no limitation on putting another encapsulation on top of it — you can send it over a GRE tunnel, OpenVPN, IPsec, whatever you want. You do have to keep stepping the MTU down as you add more headers, but it should fully work. We haven't implemented anything like that. I don't know if the switch has any limitations on anything like that. The switch would not be able to hardware accelerate it. From the Neutron side, I mean, not... So if you ran IPsec inside the tunnel, the switch could send it out; the packets would still be encrypted when they came out. Yeah, the switch has got no issue with that. If you run it on the outside, the current ASICs don't support encryption on the wire.

So I'm getting the time signal from over there, so final question, and then we'll have to wrap it up. All right. So I've been looking at that flying turtle — it's great to know there's a way to get one; I mean, I'd wear a t-shirt. I think we need one more question after that. So, all right, I actually have two questions. One's real quick. I heard — I think I read somewhere — that Cumulus also supports OpenFlow, and I don't know if that's true. That's a simple question. And no — let me answer that one first. Cumulus Linux does not currently support the OpenFlow protocol. We do support the OVSDB protocol as part of our integration with certain controller-based overlay network virtualization products. Okay, all right, thanks.

So that's time. I'm sorry. It's just a real quick one for you. So the second one: I understand that Cumulus is actually an open, full Linux platform, so you actually have the capability to add more application services, or network application services. Is that on the roadmap? Is there something available right now, or is that related to OpenStack? Adding more network services? I mean, it's an open Linux platform, so do you plan to add more besides just the switching part, the networking part? Well, this does switching and routing and VXLAN termination, and you can run ISC DHCP on there, you can run DNS servers — you can run anything you can run on a Linux server on it. Okay, cool.

We have plenty of extra t-shirts and stickers up here, so if anyone wants one, first come, first served, once this gentleman gets his last question in.