So first, I'll just explain what exactly OpenFlow is. I'm sure you all know that Neutron uses OpenFlow these days with Open vSwitch, but underneath that is a lower-level protocol called OpenFlow, which is basically an open protocol for software-defined networking. It's managed by the Open Networking Foundation, and it got started maybe four or five years ago. Since then, a lot of vendors have shipped support for it in physical switches and other networking gear. There's also a software implementation with a Linux kernel module, so you don't quite get line-rate hardware performance, but it's pretty good.

OpenFlow was motivated by some researchers at Stanford. They were doing all kinds of experimental networking work with security, QoS, routing protocols, peer-to-peer technologies, things like that, and they found it was really hard to do with existing networking gear, because those switches were usually closed and proprietary. They wanted to run experiments at line rate, at high speed, with realistic traffic volumes, and they found it was really hard to modify these networks programmatically. Certain vendors were already shipping software APIs to their gear, but they were heterogeneous, and they didn't always support everything these researchers wanted to do.

So they stepped back and thought about how to solve this problem, and what they came up with was: there's a lot of commonality in all of this gear. They all have flow tables, they all do forwarding and packet inspection in a similar way. So what if we make a common interface, so that vendors can expose the functionality of their equipment programmatically without having to open up the internals of their secret sauce?

What they came up with was this. The basic idea is to separate the control layer and the forwarding layer of these switches, so that everything in hardware below the line there is the forwarding layer, and above it is the control layer that sets up the flows and decides which packets go where. The hardware checks its flow table when it receives an incoming packet; each flow table entry has matching rules that decide whether it applies to the packet. If the switch finds that it doesn't know what to do with the packet, it sends it off over SSL, via this binary protocol, to whatever controller IP address you specify. You can give it more than one for high availability, so that if it can't talk to its controller, it tries the next one. On the controller end, you can basically do whatever you want: you just have some software running that speaks this protocol, and it can do all kinds of high-level things. What the switch exposes to the controller is mostly Layer 2 and Layer 3 attributes, plus some transport-layer ones: source and destination MAC and IP addresses if applicable, VLAN tags, and a number of other packet headers. Your controller can decide based on that, OK, this is what I want you to do with packets originating at this source MAC, for example. Your controller can then push flows back through the secure channel into the flow table and say, basically, next time you see a packet with this destination MAC, send it out this switch port. You can specify timeouts (TTLs) on flow entries, and you can wildcard certain headers.
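Just to make that last step concrete, here's a rough sketch of what pushing one of those flows looks like in Python with POX, a controller library I'll come back to in a minute; the MAC address and port number are made-up examples, not anything from the slides:

```python
# Minimal sketch (POX): push a flow that matches a destination MAC and
# forwards matching packets out a given switch port. Values are illustrative.
import pox.openflow.libopenflow_01 as of
from pox.lib.addresses import EthAddr

def push_mac_flow(connection, dst_mac="00:00:00:00:00:02", out_port=2):
    msg = of.ofp_flow_mod()
    msg.match.dl_dst = EthAddr(dst_mac)   # match only on destination MAC;
                                          # every other header is wildcarded
    msg.idle_timeout = 60                 # expire the entry if unused for 60s
    msg.actions.append(of.ofp_action_output(port=out_port))
    connection.send(msg)                  # sent over the secure channel to the switch
```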
And it just gives you ultimate flexibility while still giving you the performance you get with a traditional switch. I should have turned off my screensaver.

So yeah, in the last few years a number of vendors have put out switches with support for this: HP, Cisco, Big Switch, all of the big equipment manufacturers are supporting it these days. But if you just want to get started playing with it on your own, there's also support in OpenWrt, the Linux-based operating system for Linksys-style routers, so you can get started very easily on hardware with a cheap, off-the-shelf, consumer-grade router. Besides the hardware implementations, there's the software implementation that most people in this community are probably familiar with, which is Open vSwitch. It also ships a kernel module so you can get better performance out of it. It emulates basically what the hardware switches do, and it's actually pretty fast with the kernel module, since you're not switching back and forth between user space and kernel space all the time.

A lot of you are probably familiar with diagrams like this; the way that OpenStack uses OpenFlow can get pretty gnarly. A lot of this stuff is still managed by Nova, these virtual ethernet pairs, like you see in the blue and orange there. But with the latest developments in Neutron, they're all plugged together with Open vSwitch bridges, and things like VLAN tag translation are all handled with OpenFlow flow rules. If you want to dig in and see how Neutron is using OpenFlow under the covers, it's actually pretty straightforward. There are a couple of directories in the Neutron source repo where you can see that it's basically just shelling out to some Open vSwitch utilities to enter these flows, based on events such as network creation or adding a VM to a network.

So that's all well and good; you can learn how OpenStack makes use of all this stuff. But for me, I like to get hands-on with things to really understand them, and playing with things in Neutron comes with a lot of context and state, which makes it really hard to just experiment, because of all the things that are already there that you have to work with. So I set out to find a way to play with OpenFlow at a low level, and there's a lot of great work out there that enables you to do just that.

The basic idea is to implement a controller yourself, just like in that earlier diagram where the switch was calling out over SSL to the controller. A controller is just a piece of software that speaks this protocol, and you don't have to go off and reimplement a parser for this binary protocol yourself. There are a number of client libraries out there for your language of choice: there's one in C++ called NOX, a re-implementation of that in Python called POX, and libraries for Ruby and other languages as well.

A controller is really simple. This example is for POX, the Python one. This is kind of pseudo-code; it's not filled in. But basically, you just initialize it, and then when the switch encounters a packet that it doesn't know what to do with, it sends it off to your Python code, to a method like this. You can examine the headers, the source and destination MAC, all of these things, decide in your code what to do with the packet, and then push that flow down into the switch.
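Filled in just enough to run, that skeleton looks roughly like this; the class name and the flood-everything behavior are illustrative, not the exact code from the slide:

```python
# Rough sketch of a POX controller component: one object per switch
# connection, with a handler for packets the switch can't match.
from pox.core import core
import pox.openflow.libopenflow_01 as of

class SimpleController(object):
    def __init__(self, connection):
        self.connection = connection
        connection.addListeners(self)   # deliver events like PacketIn to this object

    def _handle_PacketIn(self, event):
        packet = event.parsed           # parsed headers: packet.src, packet.dst, VLAN, etc.
        # Here you'd decide what to do: push a flow (ofp_flow_mod, as sketched
        # earlier) or, as a trivial placeholder, just flood this one packet.
        msg = of.ofp_packet_out(data=event.ofp)
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
        self.connection.send(msg)

def launch():
    # Called by POX at startup; attach a controller to each switch that connects.
    core.openflow.addListenerByName(
        "ConnectionUp", lambda event: SimpleController(event.connection))
```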
You don't have to do this on demand, either. OpenStack, for example, already has a database of the network topology and what traffic needs to go where. So at startup, you can proactively read your database, figure out your existing topology, and push flows for all of those network components directly into the switch. That way you don't have the first-packet latency of calling back out to the controller; the switch is already pre-populated with all of your rules.

Also, if you want to play with this in a sandboxed way, you might not want to deal with a bunch of physical servers or fire up a bunch of cloud VMs. So there's this thing called Mininet, which makes use of Linux namespaces, and it's very good for that. Linux network namespacing is something that was pushed forward by OpenVZ and LXC. If you're not familiar, it's basically a way to create additional virtual network stacks that are namespaced within the kernel, so that on one running Linux system you can, in a very lightweight way, fire up completely independent network stacks and execute processes in them; from the process's perspective, it's on a different machine from the host system. You can fire up tons of them. I have a demo here in a second; you can run thousands of these things in a normal OpenStack KVM VM. So if you want to experiment with things and test topologies of hundreds or thousands of nodes, you can do all of that inside a single VM, and the performance is totally reasonable.

Mininet, again, allows you to set all of this up programmatically. It's a Python library, and basically one line of code creates a switch, you can create more and connect them, and you can create hosts, which fire up separate network namespaces. So in just a few lines of code you can create really complex network topologies to test out whatever you want to play with.

So yeah, I guess I'll show some of this stuff. I'll get this on screen here. Here you can see some Python code that sets up one of these Mininets; it's just adding a bunch of hosts and connecting them together into some switches. To fire this up, we can just run the script, if I type it right. So here we've just fired up a bunch of network namespaces and all of these simulated hosts. It has some nice syntactic magic where it sets up name entries for all of these hosts, so to run something in h1's namespace you can just prefix the command with the host name, say "h1 ping h2". Nothing will happen right now because we haven't started our controller.

So here we have some Python code that basically implements a Layer 3 router. It listens for packets coming in from Open vSwitch that the switch doesn't know what to do with yet. It emulates the Layer 3 router by doing ARP when it doesn't know about the destination: it maintains an ARP cache in memory in this Python process and pushes down flows. So, OK, I saw this MAC come in on this input port; I'll check my ARP cache, and if I don't find the destination, I'll send out an ARP on all of the other ports until I find that MAC address, and then tell the switch, OK, anything bound for this destination MAC, send it out port x. So we can fire that up, and now Open vSwitch will talk to this thing, and if we rerun our ping, we should get responses, which is great. And we can test connectivity between all of the hosts.
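For reference, the Mininet setup script in that demo is roughly along these lines; the host and switch names and the controller address are illustrative, not the exact script from the repo:

```python
#!/usr/bin/env python
# Rough sketch of a Mininet topology: two hosts (each its own network
# namespace) on one Open vSwitch instance, controlled by an external
# OpenFlow controller such as POX running on localhost.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.cli import CLI

net = Mininet(controller=RemoteController)
net.addController('c0', ip='127.0.0.1', port=6633)  # where the controller listens
s1 = net.addSwitch('s1')
h1 = net.addHost('h1')
h2 = net.addHost('h2')
net.addLink(h1, s1)
net.addLink(h2, s1)

net.start()
CLI(net)     # drop into the Mininet CLI, where 'h1 ping h2' works
net.stop()
```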
Over here on the right side, you can see some debug statements that show what it's doing, which is basically what I just described: we send out ARPs to find the host we're looking for and push flows into the switch. You can easily imagine doing more complicated things than this; you have access to QoS bits, VLAN tags, and so on. People are using these tools to experiment with even things like alternate Layer 3 protocols, alternatives to IP. Really, the sky's the limit with this stuff.

So that's really all I have. The sandbox stuff with Mininet and the POX router and all of that is up on GitHub, if anybody wants to play with it; it's really easy to get set up with. I'd encourage anybody that's interested in learning more to read the OpenFlow white paper. It's very straightforward, and it's a really great example of a clean, small interface that's just super flexible. Yeah, anybody have any questions?