Hello, and welcome everyone. Thanks for being here for this talk. I know it's the last day of the summit, and it's a Friday, so you must be some hardcore Stackers. This talk is about Neutron hybrid mode. Let me give a brief introduction. A lot of you have seen PayPal engineers give talks about how we've been deploying OpenStack at PayPal for more than a year now. We've run into a number of unique use cases that we wanted to share with you. One of them was that we wanted to run overlay and bridged-mode VMs on the same hypervisor. That was a bit of a challenge: traditionally, people run either overlay or bridged and keep them as separate networks, but we wanted to combine both, and we want to share those experiences with you. Toward the end, I have some performance numbers for bridged VMs, bare metal, and tunnel VMs. These tests were done with a lot of background traffic, but we can still get some takeaways from them. You already know about PayPal, so I'm not going to go through this slide. The only point I wanted to mention is the 137 million active users; that's active users, and our registered-user count is much higher than that.

The presentation is structured like this: a brief introduction to the data center architecture, which most of you might already know; a few details about Neutron basics, and by now most of us know what Neutron is capable of, so I'll only touch on those briefly; an introduction to overlays and physical networks; the use cases that led us to this hybrid solution; and then some performance data that I gathered in our live data center with tunnel VMs, bridged VMs, and bare metal, along with some analysis. If we have enough time, we can do a quick Q&A.

This is our standard data center architecture. It's no different from what most companies do. This is not an exact representation of what we do at PayPal, but it is to scale, a miniature version of how it looks. We have a core layer, an aggregation layer, and an access layer. By access we mean our top-of-rack switches. Racks holding hypervisors, or compute nodes, connect through the access layer, and we usually dual-connect them for redundancy. The new data center architecture looks like this, with v-switches and OpenStack: in addition to the access layer, the top-of-rack layer, we now have a bunch of v-switches below it. These are switches sitting on the hypervisors. If you do some simple math, at least from our data center's point of view, we have a 50-to-1 ratio: for every physical switch, we have 50 virtual switches if you're running non-redundant, or 25 virtual switches per physical switch if you run redundant. That's a lot. What it tells me is that I have an intelligent edge switch sitting on my hypervisors, and we can do a lot of great things with it: distributed firewalls, security groups, ACLs, tunnels, and all kinds of stuff. So I want to focus on that.
If I look at the same picture from a different viewpoint: again, the top layer is the top-of-rack switch, then you see all the hypervisors connected to it, the v-switches on them, and all the VMs running on top. The VMs connect through bridged or overlay mode, whatever networking format you use to connect your VMs, and they can also reach VMs in different racks through the top-of-rack switch.

This is a Neutron basics diagram. I apologize, I didn't have time to update it; it's an old slide, so it says Quantum, but it's Neutron now. The various OpenStack components are still intact; it's just that Ceilometer and Cinder and so on aren't shown here. The way Neutron is orchestrated within OpenStack, and how it interacts with Keystone and Nova, is all shown here.

Let me talk about overlay networks. Most of you already know this, but I want to go through it to set the stage for the problem we're trying to solve at PayPal. Overlays provide connectivity between VMs and network devices using tunnels. The cool thing is that the physical infrastructure, my network switches and routers, doesn't need to be provisioned, because, as we saw in the picture with the virtual switches, the intelligence is in the v-switches. My top-of-rack and core routers end up being more or less dumb, providing layer-3 connectivity with ECMP. As I said, the tunneling encap and decap is done at the virtual switch, the edge layer, as we like to call it. Tunneling also decouples the tenant's network addresses from the provider space, which means I can have overlapping addresses, and there is a use case at PayPal where we want to support exactly that. There are a lot of tunneling protocols you can use today; the ones in active use these days are VXLAN, STT, and NVGRE. My talk is going to focus mostly on OVS, and just to set the stage, we use the Nicira plugin for our SDN controller. So the results I'll show are mostly with STT, but now that we have the ML2 plugin that Kyle and Bob talked about this morning, I want to try out various tunneling techniques.

Physical networks, also called provider networks, allow you to connect VMs and network devices directly, and I like to use the term first-class citizens: the VMs sit on the hypervisor, so they are on par with it. What I mean by first-class citizen is that the IP address of the VM is at the same level as the hypervisor's. There is no tunneling here; the only thing that might happen is the insertion of a shim layer like a VLAN tag. At the IP layer, they share the same address space. So there are no tunneling protocols, except for the VLANs, and a lot of the time tenant separation is achieved using VLANs, or, if you don't want VLANs, using IP subnets, but that gets complicated really quickly. What we found was that creating isolation with provider or physical networks is hard for us. You can do isolation using VLANs, but then you have to run VRFs and all kinds of stuff, configure your routers, and create all these domains. When you start doing that, you're putting complexity back into the network. The whole point is to take it away from the network and push it to the edges.
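Coming back to the encap/decap point for a moment: as a minimal sketch, this is roughly what an STT tunnel port looks like in Open vSwitch command terms. It assumes an OVS build with STT support; the bridge name, port name, and IP addresses are hypothetical, and in our deployment the Nicira controller programs these ports for us rather than anyone typing them by hand.

```bash
# Create a tunnel bridge and add an STT port toward a peer hypervisor.
# br-tun and the addresses are hypothetical illustrations.
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun stt-hv2 -- set interface stt-hv2 type=stt \
    options:local_ip=10.0.0.1 options:remote_ip=10.0.0.2

# Encap/decap happens here at the edge: the physical switches and
# routers only ever see IP traffic between 10.0.0.1 and 10.0.0.2.
ovs-vsctl show
```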
It's also hard to do overlapping addresses with VLANs. Some people have tried it and made it work, but we didn't want to go down that path. Another limitation, at least from our point of view, one that causes friction within our organization, is that with VLANs it's not just the edge switches: now I've got to configure my top-of-rack switches and my core switches with VLANs. The cloud team is usually different from the network and security teams, so we need to open a ticket, and there's an SLA for that. It takes away from agility.

I just wanted to show what our networks look like. We run both overlay and physical on the same cloud infrastructure. We have a bunch of racks with our hypervisors, and on the left-hand side you see a tenant on a physical network, and on the right a tenant on an overlay network. The tenant on the physical network plumbs directly through the top-of-rack switches into the hypervisors' v-switches; the only thing the v-switches might do is insert a VLAN tag, or if you're using flat mode, not even that. On the right-hand side, the tenant on the overlay network goes through a virtualization layer; the intelligence lives in the v-switches, and that provides the tunneling.

I want to spend a few minutes on this slide. It shows the pros and cons of the various techniques. On the top row, I'm comparing running pure hypervisors, meaning bare metal with no VMs, versus VMs running in bridged mode (using either a flat network or a VLAN), versus tunnel VMs, in our case VMs on STT tunnels. Throughput-wise, the bare-metal hypervisors tend to be best: there's no hypervisor overhead, the packets hit the NIC and go out. Bridged VLAN VMs, on a scale of worst, better, and best, are somewhere in the middle. And tunnel VMs are supposed to have the worst performance. The reason I highlight that row is that the results I'll show are slightly different, so I want you to focus on it; when we get to that slide, we'll take a look. Latency-wise, a packet going from one bare-metal hypervisor to another gets the best numbers, and it degrades as you move to tunnel VMs. Now, flexibility: we need flexibility, for example, not having to touch my physical routers and switches to reconfigure my network. That is best achieved with tunnel VMs. With bridged VLANs you can do it, but, as in the VLAN example I gave, I still have to go configure the physical gear. With bare-metal hypervisors, you don't get much flexibility at all. And if you want to support overlapping IP addresses, the best solution is tunneling protocols. You can do it with bridged VLANs, but, as I mentioned, you don't want to end up running VRFs and the like, putting complexity back into the network.
Then there's operational dependency. This is what I mean: our organization, and I'm part of the cloud team, has dependencies on our networking team. Some organizations have figured out how to make that work; some haven't, and since we've been in this business a long time, there are silos in our organization, and things work through tickets. That's what I mean by operational dependency. If I use tunnel VMs, I have the least dependency on other siloed organizations in our company.

Now that we've gone through the introduction to overlays and provider networks, let me talk about our use cases. We run OpenStack in different environments. The first is the production environment. Production is what serves the PayPal website: a web tier, a mid tier, a data tier, and so on. It runs across multiple data centers, and our requirements are low latency and high throughput; who can say no to that? So we decided to go with bridged mode, because in the pros-and-cons slide, bridged mode tends to have the highest throughput and lowest latency.

Then we have another environment called M&A, mergers and acquisitions. PayPal has been in this business for a long time; we acquire companies, small and large, and some of them run their own private clouds or run in Amazon or Rackspace. Since we're building our own cloud, we'd like to bring them into it, so they can take advantage of our monitoring tools and avail themselves of the redundancy and availability zones we have across our data centers. So we run that cloud too; it's called the M&A cloud. Here we might have to support overlapping IP addresses: when we buy a company and ask them to come in, we can't tell them to go renumber their IP addresses. We need the ability to support that, so it becomes a very high priority. They also have the same need for low latency and high throughput. With these requirements, we decided to go with overlay mode, because overlapping IP addresses and flexibility are of paramount importance there.

The third environment is dev/QA. This is where our product development teams go. They all have accounts; they log in, spin up their VMs, write their code, test it, and they have QA and stage environments running there. It turned out we ran into an issue here. We thought we'd run it all in overlay mode, but in QA, because of some constraints we had, we had to run in bridged mode; the fixed-IP/floating-IP model didn't work for us. The developers were going to run in overlay mode, but QA was going to run in bridged mode. That's where we came up with this concept: how do we run both bridged and overlay mode in the same OpenStack cloud?

Those were our use cases. Now, the problem statement. We want flexibility. We want low latency and high throughput, all the usual stuff that no one says no to. And we want to support both bridged and overlay.
The VMs that get spun up on a hypervisor should have the freedom to pick: am I going to be on an overlay network or a bridged network? We don't want to restrict that. And we need a consistent deployment pattern. What I mean is that we use Puppet to configure all our hypervisors. Imagine if I had an overlay-mode configuration for my M&A cloud and a bridged-mode configuration for my production cloud: now I'm doing one-offs, and my deployment pattern isn't consistent. If instead I have a mechanism where I can configure my OVS v-switch so that it supports both, then we can deploy that everywhere. If a tenant wants bridged mode, they use it; if not, they use overlay mode. There's no penalty and no overhead.

I want to spend a few minutes on this slide, because this is the key to what we did. The outline, the big rectangle you see, is the hypervisor. It has a management interface on the left-hand side; that's where we run our OpenStack API traffic and in-band and out-of-band management. On the right-hand side, you see two copper 10-gig ports, which we bond in active-standby using Ethernet bonding; this is where our production traffic goes. Our management traffic and production traffic are on different interfaces. And we all know br-int, the integration bridge that OpenStack creates, where all the VMs land. The IP address for the production interface is configured on the bond interface. The management interface has its own IP address, but we're not interested in that, because none of the tunnel or bridged traffic ever goes through the management interface.

Along come two tenants: a VM belonging to tenant A and a VM belonging to tenant B. They have overlapping IP addresses, and they want isolation from each other. In a typical environment, they land on br-int, and then we have a tunnel bridge. I'm taking some artistic liberties here: if you look at the STT model, there are tunnel ports for each of the peer hypervisors, but I'm going to draw that as a separate tunneling bridge. So traffic from the two VMs comes in; br-int uses a local VLAN tag to separate the two tenants, and the traffic goes down to br-tun. This tunneling bridge, where you see the dotted line, looks at the VM traffic, figures out from the tunnel ID which destination hypervisor it should go to, encapsulates it, and sends it out using the source IP address of the bond interface. That's the standard stuff; that's how OpenStack and Neutron do overlays.

Then I have a situation: a third tenant comes along, tenant C, and he does not want to use overlays. He wants bridged mode, he's on VLAN 200, and he's on the same hypervisor. To solve this, we created another bridge. On the bond interface, we created a br-bond and moved the IP address from the bond interface to br-bond. Now br-tun talks to br-bond, and br-int has a straight plumbing into br-bond as well. By adding this extra bridge, br-bond, we were able to configure things so that we can run both. This is not very complicated; it's fairly straightforward.
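To make that plumbing concrete, here is a minimal sketch of the bridge layout in Open vSwitch commands, following the slide: br-int, br-tun, and a br-bond0 on top of the existing Linux bond. The interface names and IP address are hypothetical, and this illustrates the wiring rather than reproducing our exact Puppet-managed configuration.

```bash
# br-bond0 is an OVS bridge on top of the existing active-standby
# Linux bond (bond0) that carries production traffic.
ovs-vsctl add-br br-bond0
ovs-vsctl add-port br-bond0 bond0

# Move the production IP from the bond to the bridge (addresses are
# hypothetical) so STT tunnels can be sourced from br-bond0's IP.
ip addr del 10.1.1.10/24 dev bond0
ip addr add 10.1.1.10/24 dev br-bond0
ip link set br-bond0 up

# Patch br-int (the integration bridge Neutron creates) straight into
# br-bond0 so bridged flat/VLAN tenant traffic exits the same uplinks
# that the overlay (STT) traffic uses.
ovs-vsctl add-port br-int patch-bond \
    -- set interface patch-bond type=patch options:peer=patch-int
ovs-vsctl add-port br-bond0 patch-int \
    -- set interface patch-int type=patch options:peer=patch-bond
```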
People who have been playing around with Neutron have done this, but this is a real use case that we're running today. That's how we did it. I'm happy to pause here for a couple of minutes if anyone has questions; this slide is worth the time, because it's the heart of the talk. Any questions? Could you say that again? The question was: to enable this, did we have to make any changes to Neutron? No. It's all configuration. We did not do anything complicated; we just worked with Open vSwitch, we knew what it was capable of, we had a helpful vendor working with us, and we figured out how to do it. So that's how the bridged traffic flows.

Now, there are a couple of slides that go into the details of the configuration. I'll walk through them; if it gets to be too much, bear with me for a couple of minutes. When you create a flat network, the first command uses Neutron to create the bridged network: you specify that it's a flat network and provide the physical network, the physnet name; then you create a subnet, specify your gateway, and DHCP options if you want them. These are the standard Neutron commands. If you want a VLAN network, you do something similar: you provide the physical network, the physnet name, and the segmentation ID, specifying which VLAN is part of this network, and then you create your VLAN subnet. Now, the animation is off here... When you want an overlay network: in our Neutron configuration, the default mode is overlays. If you don't specify anything when you spin up a VM, it's going to get an IP address on an overlay network. So at boot time you have to say explicitly: I don't want to be on an overlay, I want to be on the bridged network.

On the compute node, we have to do a bit of work, and these are all the commands. By the way, I'm going to put these slides up; they're not on the site yet, but I'll do it right after the talk, so if you want to take pictures, that's fine, but you'll get the slides too. What we did was add br-bond0 and configure OVS with the commands shown. I don't know if anyone attended Justin's OVS deep-dive talk yesterday; it's pretty cool. As the survey said, 48% of people are using OVS, and we happen to be among them. It has a lot of cool tools for looking at and inspecting flows and things like that. These are all the commands you need to run on the compute node to enable the setup I showed in the picture.
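To recap the provisioning side, this is roughly what those flat, VLAN, and default-overlay commands look like on a Havana-era Neutron CLI. The network names, physnet label, and addresses are hypothetical stand-ins for what was on the slides.

```bash
# Flat (bridged) provider network: no VLAN tag at all.
neutron net-create bridged-flat \
    --provider:network_type flat \
    --provider:physical_network physnet1
neutron subnet-create bridged-flat 10.2.0.0/24 \
    --name bridged-flat-subnet --gateway 10.2.0.1

# VLAN provider network: same idea, plus a segmentation ID (VLAN 200).
neutron net-create bridged-vlan200 \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 200
neutron subnet-create bridged-vlan200 10.3.0.0/24 --name vlan200-subnet

# Overlay is the default tenant network type in our config, so a plain
# net-create with no provider arguments gives a tunnel-backed network.
neutron net-create tenant-overlay
neutron subnet-create tenant-overlay 192.168.0.0/24

# At boot time the tenant picks which network the VM lands on:
nova boot qa-vm --flavor m1.small --image rhel64 \
    --nic net-id=<uuid-of-bridged-vlan200>
```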
Now that we've got hybrid mode out of the way, we wanted to validate our assumptions: that bare metal would have the best throughput and lowest latency, and tunnel VMs the worst performance. We made those assumptions, but we needed to verify them. Luckily, the data center was just being built out; this was early in the second quarter of this year. The data center was built, we were standing up the cloud, and we had all these racks there. So we said, let's measure.

I was looking at two metrics: throughput and latency, in three scenarios. First, tests within a rack. Within a rack, hypervisor-to-hypervisor communication is layer 2, meaning there's no layer-3 hop: traffic goes to the top-of-rack switch and comes back, pure layer-2 switching. We ran throughput and latency tests bare metal to bare metal, bridged VM to bridged VM, and tunnel VM to tunnel VM, the tunnel being STT in this case. The second scenario was across racks. The reason for going across racks is that the traffic now goes through the distribution layer and takes a layer-3 hop. A lot of east-west traffic works this way: when VMs talk to other VMs that don't happen to be in the same rack, they take this layer-3 hop. Same three combinations: bare metal to bare metal, bridged VM to bridged VM, tunnel VM to tunnel VM. Third: remember, we have an overlay network, and if I'm deploying an overlay network, I also have a layer-3 gateway somewhere, the gateway between my virtual cloud and the physical world. So we wanted to run tests there too. Bare metal to bare metal doesn't go through the layer-3 gateway, so it's not really a fair comparison. Bridged VM to bare metal also doesn't go through the layer-3 gateway, but I wanted to see those results anyway. Tunnel VM to bare metal does go through the layer-3 gateway. The point was: if I deploy everything on overlays, traffic to the physical world has to go through the layer-3 gateway; if not, it doesn't. So what are the throughput and latency implications? But I ran into a lot of issues with this third test, so even though I got some data, I'm not going to present it, because I want to be honest, and I'm not trying to put anybody down. We have a vendor we're working with, and I'm going to go back and rerun it. If anybody is interested in working with me on this, I'm more than happy to collaborate; send me an email and we can sort it out.

A description of our setup. Our compute hypervisors are HP ProLiant SL230s-class servers, and they're massive: 16 Sandy Bridge cores at 2.6 GHz, two Intel 10-gig PCIe NICs, and 256 GB of RAM, with 96 of these nodes in each rack. All our hypervisors at PayPal run RHEL 6.4. That's what has been approved, because we're subject to PCI, the Payment Card Industry specifications, so only certain hypervisor platforms are approved, and RHEL 6.4 is one of them. For the testing I used the same VM configuration throughout: two virtual cores, 8 GB of RAM, running RHEL 6.4.

This is how our test setup looked. There were two firewalls; we have firewalls everywhere at PayPal, you can't turn in any direction without hitting one. Then there are the load balancers, the core routers, and the distribution-layer switches. I'm not showing the top-of-rack switches here; assume they're part of the rack.
Each rack connects to two different distribution switches. This was a pretty small deployment, and I thought, since we have this setup, why not run some tests? But you'd be surprised how many problems there are. We have the compute racks, and then a separate rack on the right-hand side labeled layer-3 gateways for overlays. The devices we use as layer-3 gateways between the virtual and physical worlds run on compute nodes, but they're not really compute nodes; they're routers, so they need to be treated accordingly, and we did not put them in the compute racks. You know the pets-versus-cattle story: they don't belong there. Also, those two gateway nodes need fiber 10-gig ports, and all our compute nodes are copper, so we didn't want to mix and match.

Now, the testing methodology. Let me do a quick time check. Okay. For the tunnel VMs, as I said earlier, we're using STT on OVS. The bridged VMs use a flat network; we did not use VLANs, just to keep the complexity down. I used nttcp 1.47 for throughput testing. The reason I used nttcp is that it takes disk latency out of the equation; it tests the pure network. I did not transfer a file, because then I'd have to read the disk, disk latency would enter the equation, and I wouldn't get a clear picture of the network alone. nttcp just sends around 10 million packets in both directions. I used only TCP testing, no UDP, and I ran buffer sizes from 64 bytes all the way up to 64K, incrementing in powers of 2: 64, 128, 256, and so on. The MTU on my VMs and hypervisors is 1,500 bytes everywhere. For latency measurements, I ran ping and collected 60 samples at a one-second interval, and within each set we picked the min, max, and average; I'll show those results. I used Python scripts to automate all these tests; sending 10 million packets across all these combinations takes a while.

Unfortunately, when I set out to run these tests, I expected the whole setup to be pristine, with no one else on it. But it turns out the cloud is very popular at PayPal: as soon as we built it, there were already around 470 VMs on it. So I was not able to run in an ideal test situation. I think that's fine, though, because it reflects the real world; there's a lot of background traffic going on. It might make my results a little off, but I figured it was better to show these results than wait for pristine ones. We had around 100 hypervisors in this network, and our racks were broken up into half-racks placed in different latency domains.
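The automation described was done in Python; as a hedged illustration, here is a small shell equivalent of the loop: sweep the nttcp buffer size in powers of two and collect 60 one-second ping samples. The peer address is hypothetical, and the flags assume nttcp 1.47 conventions (-t transmit, -l buffer length, -n buffer count); check your build's man page.

```bash
#!/bin/bash
# Sketch of the measurement loop from the talk (the original used
# Python scripts). PEER is the remote node under test.
PEER=10.2.0.42   # hypothetical peer address

# Throughput: buffer sizes in powers of 2, from 64 bytes up to 64K.
# The buffer count is chosen to keep each run long; the talk mentions
# on the order of 10 million packets per combination.
for bufsize in 64 128 256 512 1024 2048 4096 8192 16384 32768 65536; do
    echo "== nttcp, buffer size ${bufsize} bytes =="
    nttcp -t -l "${bufsize}" -n 1000000 "${PEER}"
done

# Latency: 60 pings at a one-second interval; ping's closing summary
# line already reports min/avg/max round-trip times.
ping -c 60 -i 1 "${PEER}" | tail -2
```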
The first setup is testing within the same rack: two hypervisors, same rack. Even though I drew the arrow as if they're talking directly, they're actually going through the top-of-rack switch, but it's layer-2 switching, no layer-3 hop. And this is what we got. I'm sorry you can't see the legend at the bottom, so I'll fill it in: red is the tunnel VMs, blue is my bare-metal hypervisors, and green is my bridged VMs.

On the X-axis is the buffer size I was using, from 64 bytes all the way up to 64K, and on the Y-axis is the throughput. At the top, where you see the green line, that's around 9 gigabits per second. And this is bidirectional; I'm just showing the TX here, and you'll also see the RX. Two things pop out of this. First, for buffer sizes below my 1500-byte MTU, you see a little step up and then it stabilizes, and for those sub-1500 buffer sizes, my tunnel VMs, the red line, outperformed my bare metal and my bridged VMs. That's why I underlined that row in the pros and cons: you would not expect that. There's a reason, and we'll get into it. The thing I still don't understand is that the bridged VMs continue to outperform my bare metal and tunnel VMs even past the 1500-byte MTU. They shouldn't; everything I know about networking says they shouldn't. I need to figure out why, so I'm going to ignore that for the time being; maybe I have to run it again.

And this is the latency. Remember, I collected 60 samples and took the min, max, and average of each set. Of the three clusters, the left-hand cluster is the min, the right-hand cluster is the max, and the middle one is the average. The blue bars are the ping latency for bare metal; as expected, it's the lowest. The tunnel VMs tend to have the highest latency. And if you look at the max for the tunnel VM, it just goes crazy. There's a reason for that: if you attended Justin's talk yesterday, you'll know that when you ping for the first time, the tunnel flow does not exist yet, so the packet gets kicked up to user space and a flow gets installed. You take a hit on that first packet; that's what the spike is. Ideally, if I were smart, I would have dropped that sample, but at least I'm setting the context so you know what it is. The average looks much more reasonable.

Now, the analysis. The observation is that for buffer sizes less than the MTU, the tunnel VMs running STT had the best performance. The reason is that all our NICs have offload capabilities: TCP segmentation offload, large receive offload, checksum offload. When you send packets smaller than the MTU, Open vSwitch with STT does buffering and takes advantage of those offloads, sending larger segments out through the NIC; that's perhaps also why you see somewhat higher latency. The bare metal doesn't do that; it just sends the packets out as-is. But once you hit the 1500-byte MTU, the bare-metal performance is the same as my tunnels, because at that point neither is getting extra advantage from the NIC. It's OVS doing these smart little things in the kernel. So if a lot of your traffic is under the MTU size, you might see slightly higher throughput precisely because you're going through STT tunnels on OVS.
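If you want to check whether your own NICs expose the offloads that make this sub-MTU STT result possible, something like the following works on most Linux hosts; the interface name is a placeholder.

```bash
# List offload features on the production NIC (interface name is
# hypothetical). The ones that matter for the STT result are
# tcp-segmentation-offload (TSO), the receive offloads (GRO/LRO),
# and the rx/tx checksum offloads.
ethtool -k eth2 | egrep -i 'segmentation|receive-offload|checksum'

# Toggling one off (e.g. TSO) is a quick way to confirm how much of
# the sub-MTU throughput win comes from the NIC offloads:
ethtool -K eth2 tso off
```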
I did the same test across racks, but unfortunately I was not able to spin up bridged VMs across the two racks; for whatever reason, we ran into some issues. So I could only compare tunnel VMs versus bare metal. Same results: for nttcp buffer sizes less than 1500 bytes, the tunnel VMs outperform my bare metal. I'm happy to take questions at this point before we go further. [Audience question about the MTU.] That's the default. What OVS does, and this is the advantage, is that when you send smaller packets, it accumulates them and amortizes the overhead over a larger packet; because it's TCP, it can do that. [But isn't TCP using a segment size?] I'm using TCP, and TCP doesn't have an MTU; it's a byte stream. It's not UDP. Okay. So I see the same performance, and I think this again goes back to OVS and STT massaging the data on its way out to get higher throughput. And I see the same latency results: latency for tunnel VMs tends to be a little on the high side. Again, ignore the max; that's the one-off first packet. The numbers on the left: the first gridline is 0.5 milliseconds and the top is 3.5 milliseconds. [Is that all in milliseconds?] Yes, milliseconds; I'm sorry, I should have labeled it. It's still fast; we're not talking hundreds, or even tens, of milliseconds. My bare metal takes around 0.1 milliseconds, and my tunnel VM takes maybe 0.25 milliseconds, so roughly 2x on the min, and maybe 3x on the average.

Again, the analysis of these results: we had no bridged VMs because of a test setup issue. For buffer sizes less than the MTU, the tunnel VMs had better overall throughput, again because of OVS and STT tunnel optimizations taking advantage of the TCP offloads: large segment receive and transmit, and checksum offloads. When the buffer size increases beyond the MTU, the results for bare metal and tunnel all stabilize at about the same numbers. And as I said earlier, tunnel VMs have on average higher latency, around 2x to 3x over bare metal.

Now, the setup I mentioned earlier, testing across the layer-3 gateways: I got results, but I'm not going to show them, because honestly they're off. I had trouble interpreting them, and I'm still trying to sort that out. If anyone wants to collaborate with me on this, send me an email and I'll be happy to work with you. We have racks and racks of compute servers, so we can do all these tests. And with ML2, I can also do VXLAN and GRE and all that, which will make life easier.

So, the conclusions and future work. When you're deploying, first understand what you need out of your network: low latency, high throughput, flexibility, and so on. You cannot get everything; you need to give and take. Based on that, you can come up with a solution, and that's what we did; we ended up with a hybrid mode. Also, before you assume that tunneling is going to be better or worse, always verify: run your own performance tests. And keep your deployment pattern simple and uniform.
Even if you're not using overlay mode, put the capability into your OVS configuration, because who knows, you might end up using it. For the future, I would like to expand my performance tests across layer-3 networks and cover various tunneling schemes like VXLAN and GRE. Right now, the way Neutron is set up, I have a single-plugin model that I cannot change. Hopefully, with the ML2 plugin coming out, I can take advantage of it and run some VMs on VXLAN, some on GRE, and so on. We'll see how that pans out. Again, if you want to collaborate, I'm more than happy to work with you; send me an email. I'm happy to take any questions. Thank you.