Hi, everyone. My name's Tim. I'm from Midokura, and I'll be talking to you a little bit today about our SDN products for building completely virtualized, distributed networks. I'm going to give you an overview of how we work and how we function, then go through a small use case, and open it up for questions afterwards.

If any of you have actually gone through the setup of OpenStack, and I'm sure you probably have, you realize that one of the most difficult parts is setting up the networking. Neutron gives you a fairly good way of doing something simple, but anything more complex than that tends to get a little dicey. So where we come in is we virtualize network layers two through four and provide a completely virtualized topology that's independent of the physical topology of your network. We colloquially call this the IP bus: as long as there is IP connectivity between all of your hypervisors, we don't care at all what the topology of your network or your connections looks like. Everything else is completely virtualized between our agents running on your hypervisors.

Once we install our agents on each of your hypervisors, and these are very thin agents that sit on the hypervisor itself, as well as on your border gateways to speak BGP, we connect all of them virtually to our back end, which uses Cassandra for reads and writes and stores the virtual and physical topology of the network in ZooKeeper. Our agent also handles the BGP protocol so that we can do external network connectivity, and by using BGP we can do multi-data-center pathing and things like load balancing inside of our virtual framework, so you don't have to do that on your own.

So the first thing we do is install our agent on all of the hypervisors. It's a very thin agent: it runs once on each hypervisor, serving all of the VMs operating there, and the same agent is installed on any nodes that connect to the external network to perform BGP and topology management for those nodes. Once we do this, we can create a virtual topology inside of our own infrastructure that is, again, independent of your physical network setup, and you can design it through our CP or our CLI in any manner you see fit. It doesn't have to mirror the physical network in any way.

Now let's talk a little bit about what happens once we've set up this virtual network and actually have to pass packets through it, because that's the million-dollar question here. When a new packet flow is established, our agent calculates the route and performs topology management against the actual physical network to determine the best path for that IP flow. That's done on a per-agent basis, so it's distributed across all of the hypervisors, and you don't have to worry about any sort of centralized management or load concentrating on any particular node. Once that's done, the virtual topology is stored inside of our database for future reference, and it can be recalculated every time we need to make a change to the virtual network or something happens in the physical network. Once the path for the initial packet is calculated, we install it in the kernel's flow tables through OVS, so that every subsequent packet in the same flow doesn't have to go through our agent at all; it goes directly to the kernel, straight onto the fast path.
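To make that slow-path/fast-path split concrete, here is a minimal sketch of the pattern in Python. Every name in it (`VirtualTopology`, `simulate`, `install_flow`, and so on) is a hypothetical stand-in, not MidoNet's actual code; in the real system the topology state lives in ZooKeeper and the flows land in the OVS kernel datapath.

```python
# Hypothetical sketch of the slow path / fast path split described above.
# These names are illustrative stand-ins, not MidoNet's real interfaces.

class Agent:
    """Per-hypervisor agent: sees only the first packet of each flow."""

    def __init__(self, virtual_topology, kernel_datapath):
        self.topology = virtual_topology   # backed by ZooKeeper in the real system
        self.datapath = kernel_datapath    # the OVS kernel module in the real system

    def handle_first_packet(self, packet):
        # Slow path: walk the packet through the virtual topology
        # (bridges, routers, NAT, security groups) to decide its fate.
        actions = self.topology.simulate(packet)

        # Install that decision as a kernel flow keyed on the packet's
        # match fields (addresses, ports, protocol, ...).
        self.datapath.install_flow(match=packet.flow_match(), actions=actions)

        # Forward this first packet ourselves. Every later packet in the
        # flow matches the kernel flow table and never reaches the agent,
        # which is why subsequent packets run at near line speed.
        self.datapath.execute(actions, packet)
```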
So this means that every subsequent packet travels at near line speed and doesn't touch our agent in any way. The overhead of the initial packet calculation is gone completely, because we aren't touching databases and we aren't doing any calculations on subsequent packets of the flow.

This is all well and good, but let's talk a little bit about a simple use case of how this can help you. Say we have a very simple OpenStack setup with two hypervisors, each running a pair of VMs. Let's see what happens if we make some changes to the virtual network inside of MidoNet, independent of the physical network, and how that change is dynamically reflected in the actual network itself.

The first thing we need is a standard setup. In this case, we just have two hypervisors, each running a pair of VMs, with IP connectivity between all of these nodes. We don't care how they're connected, as long as they can reach each other; again, we take care of the virtual management inside our own software. So in this case, let's snip one of these connections virtually. We're just going to do something simple and kill off a virtual router in between these two interfaces, and see how that affects the traffic in real time. The virtual topology itself is stored, again, inside of our database, so it can be as complex as you want without having to affect the physical network. Just for this demo, we'll snip it at a very low level to see what happens.

So as you can see, we're on one of the VMs here, and we have connectivity, so we can ping between VMs on different hosts. This is an example of our CP, by the way, which lets us manage our network routing and topology visually. What we're going to do is go into the tenant router for this particular hypervisor and kill off one of the links by simply unlinking the two ports. You'll see in real time that the network is destroyed when we do this, so pings will not reach outside of that particular VM. There, you can see the pings have stopped. Then we go back in and relink the two through the virtual router we established before, and the pings dynamically reestablish themselves. This could happen in the virtual network because we're rearranging things into a new topology, or it could happen because we lose a physical connection between two hypervisors and have to route around it in the virtual network. So let's link these back together again, and if we go back to our VM, you can see that pings are reestablished and our networking is back, all without having to go to the data center or rearrange any of our physical switches in order to establish a new topology, or even just to manage the network we already have.

Our setup is pretty simple. Our northbound API is Neutron, and we speak directly to the standard OVS kernel module for installing flows. All of this is handled by the agents which, as I said, are installed on each hypervisor, so we have failover capacity across multiple nodes, as well as a database back end that you can scale for high availability if you'd like.
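Because the northbound API is standard Neutron, day-to-day network operations need nothing vendor-specific. As a rough illustration, assuming an openstacksdk installation with a configured `clouds.yaml` entry (the cloud name and CIDR below are made-up example values):

```python
# Ordinary Neutron calls via openstacksdk; with MidoNet as the Neutron
# backend, these objects materialize in the virtual topology rather than
# on physical switches. "mycloud" and the CIDR are example values.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a network and subnet exactly as on any other Neutron backend.
net = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="10.0.0.0/24")

# Create a router and attach the subnet; this is the kind of virtual
# router that gets unlinked and relinked in the demo above.
router = conn.network.create_router(name="demo-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```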
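And the demo we just walked through really boils down to two API calls against a virtual router port. The host name, endpoint paths, and UUIDs below are placeholders rather than the documented API, but they show the shape of the operation:

```python
# Placeholder REST calls mirroring the demo: unlink a tenant-router port
# from its peer, then relink it. Host, paths, and UUIDs are made up.
import requests

API = "http://midonet-api.example.com"            # placeholder host
PORT = "11111111-1111-1111-1111-111111111111"     # hypothetical port UUID
PEER = "22222222-2222-2222-2222-222222222222"     # hypothetical peer UUID

# Snip the virtual link: the agents recompute flows against the updated
# topology, and pings between the VMs stop almost immediately.
requests.delete(f"{API}/ports/{PORT}/link")

# Restore the link: flows are recomputed again and pings resume, with
# no change whatsoever to the physical network underneath.
requests.post(f"{API}/ports/{PORT}/link", json={"peerId": PEER})
```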
We also provide an API server so that you can make calls from your own software, as well as a CLI and, as you saw during the demo, a CP that's independent of Horizon, so you can manage complicated things like NAT rules, firewall tables, and any complex security groups you want, giving you more fine-grained control over your network. So that's a quick overview of how we do layer two through four virtual network management. If you have any questions, feel free to ask me now, or you can come by our booth right by the door over there and give us a whirl. Any questions?