Good afternoon. My name's Carl, I'm the Cloud Solutions Architect at MetaCora, and I'm here to talk to you today about MetaNet, our overlay-based network virtualization product that integrates with OpenStack. The reason we do it overlay-based is to make your lives a little bit simpler: you can do whatever you want on the physical network, and we'll take care of all of that and turn it into a RESTful API that simply plugs into OpenStack.

So how our system works is you have to have IP connectivity between all of your nodes. It doesn't matter if it's routed or layer two, which is cool. As long as you can communicate back and forth with IP, we create GRE tunnels between hosts to make everything work. We install a program called the MetaNet Agent on each one of your hypervisors and your border nodes. And unlike other virtualization products you might see, we don't have a centralized controller; our entire system is distributed and designed to be fault tolerant. We have network state nodes, you have three or five of them, and they're basically just Cassandra and ZooKeeper. We put the information in there about what your virtual topology looks like, and that's queried on demand by the MetaNet Agent to generate the GRE tunnels that make things work.

So let me give you an example. If we spin up four VMs, two under a tenant named Alice and two under a tenant named Bob, it looks pretty familiar in Horizon: this is the Alice tenant with two VMs, and this is the Bob tenant with two VMs. Now, by default, all tenants are isolated inside of MetaNet. They get their own router, they get their own network, and they can't talk to each other unless you specifically go in and tell it to. In this demo, what we've done ahead of time is set up a link between Alice's router and Bob's router so that traffic can traverse between them.
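The default-isolation model described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names (NetworkState, peer_routers, can_reach) are invented for the sketch and are not MetaNet's actual API; the real state lives in the Cassandra/ZooKeeper-backed state nodes.

```python
class NetworkState:
    """Stands in for the Cassandra/ZooKeeper-backed network state nodes."""

    def __init__(self):
        self.routers = {}      # tenant name -> that tenant's router id
        self.peerings = set()  # frozensets of explicitly peered router ids

    def add_tenant(self, tenant):
        # Every tenant gets its own router (and network) automatically.
        self.routers[tenant] = "router-%s" % tenant

    def peer_routers(self, tenant_a, tenant_b):
        # The explicit router-to-router link set up ahead of the demo.
        self.peerings.add(frozenset((self.routers[tenant_a],
                                     self.routers[tenant_b])))

    def can_reach(self, tenant_a, tenant_b):
        # Tenants are isolated by default; traffic crosses tenant
        # boundaries only over an explicitly created peering link.
        if tenant_a == tenant_b:
            return True
        return frozenset((self.routers[tenant_a],
                          self.routers[tenant_b])) in self.peerings

state = NetworkState()
state.add_tenant("alice")
state.add_tenant("bob")
print(state.can_reach("alice", "bob"))  # False: isolated by default
state.peer_routers("alice", "bob")
print(state.can_reach("alice", "bob"))  # True: after the explicit link
```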
And then during the demo, we're actually gonna break that connection so you can see that it instantaneously changes.

So let's talk about what happens when a packet enters MetaNet and what that entails. First thing that happens is a packet comes in from, let's say, Alice1, doing a Google public DNS query. The hypervisor gets a packet in from the VM that's going to Google public DNS. Our OVS kernel module is not going to have a flow for that, so it bumps it up to the MetaNet agent. Once it's at the MetaNet agent, the agent looks at that packet and goes: okay, I need to find out what your virtual topology looks like so I can figure out how to route this packet.

That's a lot of words that basically boils down to this picture. You've got a VM that's plugged into a tenant network, which has a subnet associated with it, which is plugged into your tenant's router. Then there's what we call a provider router. This is the external router, if you're familiar with the Quantum interface, although with our system it's actually built into our network. That router is what connects all of your tenants together and what connects you to the internet.

So once the system has that topology in place, it can inspect it and simulate what's going to happen as the packet flows through all of that topology. There's a couple of things it's gonna need to do. Every time the packet passes through a router, we obviously have to decrement the TTL, but we also have to do a network address translation to get the packet out. So we write all of those rules into a flow in the Linux kernel and keep it there so that it's in the data fast path. All of the additional packets related to that flow simply go through the fast path and don't come back up through our MetaNet agent.
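The simulation step described above can be sketched like this: walk the virtual topology hop by hop, collect the transformation each hop would apply (TTL decrement at every router, NAT at the provider router), then apply the whole list as one flow. The data structures and names here are invented for illustration and are not MetaNet's real ones.

```python
def simulate(packet, hops):
    """Collect the per-hop transformations as a single installable flow.

    packet: dict with 'src', 'dst', 'ttl'.
    hops:   ordered list of router dicts the packet traverses.
    """
    actions = []
    for hop in hops:
        actions.append(("dec_ttl",))             # every router hop
        if hop.get("nat"):                       # e.g. source NAT at the
            actions.append(("set_src", hop["nat"]))  # provider router
    return actions

def apply_flow(packet, actions):
    """What the kernel fast path does to every packet in the flow."""
    pkt = dict(packet)
    for action in actions:
        if action[0] == "dec_ttl":
            pkt["ttl"] -= 1
        elif action[0] == "set_src":
            pkt["src"] = action[1]
    return pkt

hops = [{"name": "alice-router"},
        {"name": "provider-router", "nat": "203.0.113.7"}]
flow = simulate({"src": "10.0.0.2", "dst": "8.8.8.8", "ttl": 64}, hops)
out = apply_flow({"src": "10.0.0.2", "dst": "8.8.8.8", "ttl": 64}, flow)
print(out)  # TTL decremented once per router, source NATed at the edge
```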
After it's done doing those translations, it puts the packet into a GRE tunnel and sends it on its way to a border node. The border node picks it up, looks at the key ID on that GRE tunnel and goes: oh, this is destined for the internet, I'll just send it on its way. When you get a packet coming back, it creates a whole new flow. So your border node gets very excited because it has a new packet coming in. It does that exact same topology lookup, figures out the exact same path, does the same transformations on the packet only in reverse order, and then sends it through a GRE tunnel back to the host, at which point the host gets very excited and sends the packet to the VM. Now the good part is only the first packet is actually inspected by our MetaNet agent. All additional packets on the same flow simply go through the kernel fast-path module, so the overhead is very low.

And here is a demonstration of what that looks like. What we're doing here is, I'm just gonna show you the IP address on this host. This is Alice1. Then we're going to ping Alice2, which is on the same internal network, so obviously you would expect that to work. We're also gonna do an HTTP request against it with curl, and it comes back and tells you that it's Alice2. That's lovely. And then we're also gonna curl Major Hayden's lovely icanhazip.com to show you that we have a working connection to the internet as well. So all of what I just talked about happened there. And as you can see, there's not a whole lot of delay. On this ping, that first flow, it took us an extra 16 milliseconds to go find all of the network topology, figure out the flows, and insert those into the kernel module, and then you're basically running at line speed. A little bit less, because obviously there's encapsulation overhead, but otherwise not too bad for getting in and out of the network.
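The "only the first packet hits the agent" behaviour is essentially a flow cache keyed on the connection tuple. Here's a minimal sketch of that idea; the function names and the action list are illustrative, not MetaNet's actual implementation.

```python
agent_lookups = 0  # counts how often the slow path (the agent) runs
flow_table = {}    # connection 5-tuple -> installed kernel flow actions

def slow_path(key):
    """Stand-in for the agent's topology lookup and packet simulation."""
    global agent_lookups
    agent_lookups += 1
    return ["dec_ttl", "nat", "gre_encap"]  # the flow it installs

def handle_packet(key):
    """Kernel-module view: miss punts to the agent once, hits stay fast."""
    if key not in flow_table:
        flow_table[key] = slow_path(key)  # first packet of the flow
    return flow_table[key]                # every later packet

key = ("10.0.0.2", "8.8.8.8", 17, 51515, 53)  # the DNS query's 5-tuple
for _ in range(1000):
    handle_packet(key)
print(agent_lookups)  # 1: the other 999 packets stayed in the fast path
```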
So the other thing we're going to do is send a packet between two different tenants. This is what I said before: each tenant has its own router, and we've created a virtual link between those two routers inside of the MetaNet control panel, which you'll be able to see here in a moment. We do the same thing: we query the network state, look at how all the routers and switches are plugged together, and do the same sort of transformations. Once we've gathered all that information, we send the packet in a GRE tunnel directly to the other host. It picks it up, goes: I know what I'm supposed to do with this, drops the packet back out, the VM picks it up, and the same thing happens on the return path for responses.

And that looks a little bit like this. This is on Alice. We're gonna curl an IP address that isn't in Alice's subnet, and that's Bob. There's the other Bob instance as well. Now we're going to go into the MetaNet control panel, pull up the Alice tenant, pull up the router for Alice, and you can see the link to Bob's router. We're just gonna hop right over and unpeer that link. That breaks the connection between the two routers. Then we switch back to the console, try to run that curl command again, and it fails because there's no route to host.

We have several features that are already completed inside of MetaNet, and we're working on more. This is a list of the stuff we currently have. We can do layer three load balancing, both at the edge and internally. So if you want to have a service internally that has many REST API endpoints, you don't have to send your packets all the way out to the internet to hit a load balancer and come back in; we can do that inside. We're working on features that will add layer four through seven support for that. We're also working on integrating with various third parties for security applications and VPNs.
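The unpeering step in the demo boils down to this: once the router link is removed from the network state, the next first-packet lookup finds no path and the sender sees "no route to host". A sketch of that lookup, with invented names and prefixes:

```python
# Network state: the demo's pre-created router peering, plus which
# router owns which subnet.  All values here are made up for the sketch.
peerings = {frozenset(("alice-router", "bob-router"))}
routes = {"10.0.0.0/24": "alice-router", "10.0.1.0/24": "bob-router"}

def resolve(src_router, dst_prefix):
    """First-packet path lookup: return the next router, or fail."""
    dst_router = routes[dst_prefix]
    if src_router == dst_router:
        return dst_router                 # same tenant, local delivery
    if frozenset((src_router, dst_router)) in peerings:
        return dst_router                 # peered: GRE straight across
    raise OSError("no route to host")     # what the failed curl reports

print(resolve("alice-router", "10.0.1.0/24"))  # works while peered
peerings.clear()                               # "unpeer" in the panel
try:
    resolve("alice-router", "10.0.1.0/24")
except OSError as e:
    print(e)                                   # no route to host
```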
And our system is entirely REST-based. We have a plugin already in Quantum; it's upstream as of Folsom, and it also works in Grizzly. Basically, what our Quantum API plugin does is take the API calls from Quantum, convert them to our API calls, and send them on their way.

If you have any questions, unfortunately we don't have much time, but feel free to come by our booth; we're back there next to Dell. And if you have more interest in the whole software-defined networking topic and how it works with OpenStack, there is a panel at 3:40 that our CEO, Dan Dimitrio, will be on along with a bunch of other folks in this industry. I welcome you to swing by there. Thank you very much. Have a good day.
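The plugin's job as described is a straight translation layer. Here's a minimal sketch of one such translation for a network-create call; the MetaNet endpoint path and field names are invented for illustration and are not the real API.

```python
def translate_create_network(quantum_body):
    """Turn a Quantum create-network request body into a MetaNet REST
    call description.  Paths and fields here are hypothetical."""
    net = quantum_body["network"]
    return {
        "method": "POST",
        "path": "/metanet/v1/tenants/%s/networks" % net["tenant_id"],
        "body": {"name": net["name"]},
    }

call = translate_create_network(
    {"network": {"tenant_id": "alice", "name": "private"}})
print(call["path"])  # /metanet/v1/tenants/alice/networks
```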