Hi, my name is Justin Pettit. I work on the Open vSwitch project, and I've been involved with it from the beginning as one of the original contributors. I was also one of the original authors of the OpenFlow specification and the OpenFlow reference implementation, and a founding employee of Nicira, so I've been working on this stuff for quite a while. Open vSwitch is a virtual switch with a lot more features than the Linux bridge: things like NetFlow, IPFIX, sFlow, and port mirroring (SPAN/RSPAN). You can also implement fine-grained ACLs and QoS policies with OpenFlow. In addition to OpenFlow, there's a central management protocol, OVSDB, that can be used to configure the configuration database. It also supports port bonding, LACP, and various tunneling protocols, and it works on a number of different platforms, including Linux and FreeBSD. It's licensed under Apache 2, except for the kernel module, which is GPL because of kernel licensing requirements. Something a lot of people aren't aware of is that Open vSwitch is used in a lot of hardware switches as their OpenFlow stack; the architecture is designed so it's easy to port not just to new software switches but to hardware switches as well. The project has picked up quite a bit of steam over time. Roughly, these are the different mailing lists we have: discuss is for general user-level questions; announce is where we announce new versions of OVS; all the development happens on the dev mailing list, so that's where the code reviews take place; and when things get committed, they show up on the git mailing list. All of this is visible on openvswitch.org. I saw yesterday that there was an OpenStack user survey, and about 48% of deployments said they use Open vSwitch, so it's got quite a bit of support.
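As a taste of what configuring one of those features looks like, here is a hedged sketch of turning NetFlow on for a bridge (the bridge name and collector address are made up; the exact syntax is documented in the ovs-vsctl man page):

```shell
# Create a NetFlow record and point br0's "netflow" column at it
# (bridge name and collector address are assumptions for illustration).
ovs-vsctl -- set bridge br0 netflow=@nf \
    -- --id=@nf create netflow targets=\"10.0.0.1:5566\" active-timeout=30

# Turn NetFlow back off by clearing the column.
ovs-vsctl clear bridge br0 netflow
```

The `--id=@nf` trick lets one ovs-vsctl invocation create a row and reference it from another table in the same transaction.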
We've had a number of contributors. This slide's a little old; I noticed this morning that VMware wasn't on there, so I added it, but otherwise there have been a lot more contributions since then. All right, so we'll get into the architecture of OVS. There are three main components, the parts in blue, and up at the top is the control cluster, which could be an OpenFlow controller or an OVSDB manager. All of OVS is configurable remotely as well as locally on the box. ovsdb-server is the component that holds the configuration database: stateful information that survives a reboot, such as the configuration of bridges and interfaces. ovs-vswitchd is what you'd really consider the core of OVS; it does all the handling of flow setups and so on. And then there's a kernel module that is really just a cache of recently seen traffic, to improve performance. This slide also shows the protocols the different components use to talk to each other. The configuration database is the first component I'll talk about: ovsdb-server. As I mentioned, it holds the switch-level configuration: create these bridges, attach these interfaces to these bridges, create tunnels and attach them to bridges. It's also where you configure things like your OVSDB managers and OpenFlow controllers. The state stored in the database is durable; it's actually written to disk. That doesn't mean an integration always uses it that way: on XenServer, for example, the database is blown away on reboot and recreated from XenStore. But ovsdb-server itself stores the configuration persistently. And it's an actual database, with the properties you'd expect of a database.
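Because the configuration lives in an on-disk database, you can inspect it directly; a sketch (the conf.db path shown is the usual Linux location and may differ on your system):

```shell
# Dump the live contents of the configuration database over the local socket.
ovsdb-client dump

# Replay the transaction log recorded in the on-disk database file
# (-m prints the per-record details).
ovsdb-tool show-log -m /etc/openvswitch/conf.db
```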
One notable aspect of the implementation is that it's log-based, meaning that rather than just storing the current state of the database, it records all the changes that have happened to it. That makes it very useful for debugging, because as someone makes changes to the database you can see what they were and figure out how you got into a given state; I'll show an example of that a little later. The protocol that ovsdb-server speaks is OVSDB, which is JSON-RPC based. It's in the process of becoming an informational RFC at the IETF, so it's available on the IETF website. The database consists of a number of tables; I've highlighted the main ones you usually interact with. There's a full entity-relationship diagram in the ovs-vswitchd.conf.db man page, but these are the core ones. There is always an Open_vSwitch table, which only ever has one row, and it contains pointers to things like bridges. So if you create a bridge, a new row is added to the Bridge table; if you add a port to that bridge, a row is added to the Interface table and another to the Port table, and it all links back up to the bridge. There's an extra layer between ports and interfaces so that you can implement something like a bond: the bond is defined in the Port table, and the port points to the physical interfaces. So if you had a bond0 consisting of eth0 and eth1, bond0 would be a row in the Port table pointing at two Interface rows. One thing I forgot to mention about ovsdb-server: at the bottom of the slide are the tools that can be used to interact with it. There are a lot of utilities in OVS for configuring the system and looking at its state.
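The port/interface split described above can be observed directly in the database; a sketch, assuming a bridge br0 and NICs eth0/eth1 already exist:

```shell
# Create a bond: one Port row (bond0) pointing at two Interface rows.
ovs-vsctl add-bond br0 bond0 eth0 eth1

ovs-vsctl list port bond0      # the "interfaces" column holds two UUIDs
ovs-vsctl list interface eth0  # one of the physical members of the bond
```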
So at the bottom, ovs-vsctl, ovsdb-tool, ovsdb-client, and ovs-appctl are all tools that can be used to interact with the database, because it can be a little overwhelming when you first use OVS. ovs-vsctl configures OVS: if you've ever created a bridge, you've probably used the ovs-vsctl add-br command. ovs-vsctl is really just a front end to the database; it has a concept of how OVS works and translates that into OVSDB calls. So it's a high-level interface to the database with convenient commands for adding bridges and ports. It also has some low-level commands, like ovs-vsctl list, which shows all the columns and rows of a table. For managing the database itself there's ovsdb-tool, and its show-log command shows all the changes that have happened to the database. Here's an example of running ovsdb-tool show-log; you can see the different records. This one shows XenServer adding a port to a bridge. The number, record three here, shows where the change sits in the log, along with the time of the change. Then there's a caller's comment, where whoever is making calls to the database can provide context about what's happening. For example, this transaction is doing a bunch of things: making sure the bridge exists, adding the interface, and providing a bunch of information about the interface. It could be very difficult to look at the raw changes to the database and work out what's happening, so whoever makes the call can supply this comment. By default, ovs-vsctl records the command line that was handed to it, so you can tell what it did. If you're writing your own tool that speaks the OVSDB protocol, I'd recommend doing the same thing.
That's what we've had our controller team do, so the controller provides context about what it did. Then when you go back trying to figure something out, you can see: these changes happened locally, these came from the controller, and this is what the controller was doing. It saves a lot of finger-pointing, because you can actually see all the changes that happened. And down at the bottom, the database changes are the actual modifications made to the database itself. So now we'll talk about the forwarding path. The Linux bridge is a learning bridge, so its requirements are fairly low: it receives a packet, looks at the source MAC address and possibly learns it, then looks at the destination address and figures out which port the packet should go out. All of that can be done in the kernel; it's just a fast lookup. In OVS, the design is different. We wanted to make the kernel part as simple as possible. That has also made porting a lot easier: since the kernel module is so small, porting to a new system is much less work, because the complexity lives in the ovs-vswitchd process, which is much more portable. The kernel module is really just a cache of recently seen traffic. A packet arrives on an interface and is handed to the OVS kernel module, which looks it up in its tables to see whether it knows what to do with it. If it does, it applies the actions; if it's just a forward action, it sends the packet out the appropriate interface. If it doesn't know what to do, it sends the packet up to ovs-vswitchd, which has the complete configuration. There will be more on that coming up. ovs-vswitchd is the core component of the system. It speaks OpenFlow up to the controllers.
It speaks to the database server over the OVSDB protocol that I mentioned, it communicates with the kernel module over Netlink on Linux, and it has an abstraction for dealing with interfaces as well. ovs-vswitchd can support multiple bridges, so you can create a br0 and a br1; it has a concept of that. It's not necessarily realized that way in the datapath, but ovs-vswitchd is responsible for making sure things are wired and forwarded properly. As I mentioned, the kernel module is relatively simple, so ovs-vswitchd, which has all this complex information, possibly very complicated flow tables, needs to figure out an efficient way to represent it in the kernel. What we did in older versions, which is easier to explain first, is always create an exact-match entry. That way, when a packet arrived, all we had to do was pull the headers off and throw them against a hash table: if an entry was found, we forwarded the packet, and if not, it got sent up to user space. ovs-vswitchd was responsible for pushing those flows down, for managing the flow table in the kernel. So ovs-vswitchd implements things like mirroring, bonds, and VLANs, holds the OpenFlow tables, and manages those datapath flows. The commands you use to deal with ovs-vswitchd are typically ovs-ofctl and ovs-appctl; I'll get into those a little later. The kernel module does the switching and tunneling, and as I mentioned it's a fast cache of entries. The flow table, the configuration of the switch, can be very complicated: we routinely see flow tables with tens of thousands of entries. It's very expensive to figure out what to do with a packet when the flow table is that complicated.
So ovs-vswitchd computes a much simpler flow that it can push down into the kernel; the kernel is essentially just a cache. To make that work, we don't want overlapping flows: you want to be able to quickly find the flow entry and know what to do with the packet, rather than dealing with priorities. As I mentioned, the kernel flow entry looks a lot different from OpenFlow. The kernel module doesn't know anything about OpenFlow and doesn't do expiration; all of that is handled by ovs-vswitchd. We've really tried to keep the kernel module as simple as possible, which gives us a lot of flexibility in adding new features to ovs-vswitchd and makes porting to other systems a lot more straightforward. The other main responsibility of the kernel module is implementing tunnels. I've already covered most of how user space processes packets, but when a packet misses in the kernel, it gets sent up to user space for processing. We call the thing that holds the OpenFlow tables the classifier. As a packet runs through the classifier, through all these OpenFlow tables, because you can resubmit to other tables or back to the same table, we accumulate the actions. So if the result is "modify this packet and then forward it out this interface," those actions all get accumulated, and that becomes the fast cache entry that gets pushed down into the kernel. As I mentioned, in older versions of OVS we had only a single flow table in the kernel, which was exact match and therefore very efficient, because we only had to do one lookup.
But it had a problem: say someone had configured the switch just to do L2 learning. Since all the kernel flows were exact match, if someone did a port scan, every one of those different ports would be a miss in the kernel and get sent up to user space, and performance would take a big hit. So in version 1.11 we introduced a feature called megaflows, which adds support for wildcarding. ovs-vswitchd dynamically determines, based on the configuration, how much wildcarding it can do. For example, if you implement a policy that just says do L2 learning, it will only match on the MAC addresses and the Ethertype, and everything else will be wildcarded. So in that port-scan example from earlier, each new port no longer results in a packet being sent up to user space for processing. With that, if you just do normal L2 learning, we now see performance on par with the Linux bridge, which was a huge improvement: orders of magnitude over what we saw before megaflows. As I mentioned, the kernel module also implements tunnels. Tunnels in OVS are typically just configured as ports. If a packet comes in on a tunnel, it shows up on an OpenFlow port number, and that becomes part of the lookup for that flow. If you want to send a packet out a tunnel, you just output it to that port, which encapsulates it. The tunnel types currently supported are GRE, VXLAN, and LISP, but others are being developed as well.
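Configuring a tunnel as a port, as just described, might look like this (a hedged sketch: the remote IP and OpenFlow port numbers are made up, and `key=flow` lets the flow table set and match the tunnel key):

```shell
# Create a VXLAN tunnel port whose tunnel key is controlled by the flow table.
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan \
    options:remote_ip=192.168.1.2 options:key=flow

# Traffic arriving with tunnel key 5000 goes out OpenFlow port 1...
ovs-ofctl add-flow br0 "tun_id=5000,actions=output:1"
# ...and traffic from port 1 is encapsulated with key 5000 and sent out
# the tunnel port (assumed here to be OpenFlow port 2).
ovs-ofctl add-flow br0 "in_port=1,actions=set_tunnel:5000,output:2"
```

Matching on `tun_id` is exactly the kind of field the NXM extension discussed shortly was created for.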
In the kernel, there's a command, ovs-dpctl show, that shows the configuration of the datapath. Because the ports are implemented inside OVS, they're not actual Linux interfaces, so you don't see them with ifconfig; they're only visible if you run ovs-dpctl show, which I'll demonstrate later on. Now let's go through some of the utilities. For configuring OVS there's a command called ovs-ofctl, which actually speaks OpenFlow to the switch. Just like ovs-vsctl speaks the OVSDB protocol, ovs-ofctl speaks the OpenFlow protocol to the switch, typically over a local socket. With ovs-ofctl you can dump the flow table, add flows, delete flows, all of that. We've added a number of extensions to OpenFlow; there are some examples here. We've been pretty involved in the OpenFlow process, and what we ended up becoming was sort of an incubator for new ideas, and the good ones have now been added to later versions of OpenFlow. For example, in older versions of OpenFlow there was a fixed set of fields you could match on, but we started needing to match on things like the tunnel key, and there was just no way to do that without redefining the OpenFlow specification. So we came up with an extensible match format we called NXM, and in OpenFlow 1.2 it was adopted by the Open Networking Foundation. So a lot of ideas that came from OVS have drifted over into OpenFlow. I'll get into hidden flows later, but the flow table you see through OpenFlow isn't necessarily the flow table that's actually maintained by the switch. So here's an example of running the ovs-ofctl show command for a particular bridge.
The reason I'm showing this is that these rows show the different interfaces attached to a bridge: show me what OpenFlow knows about this bridge. In this example it's saying that eth0 is OpenFlow port one, along with a bunch of state about the interface. What I want to highlight is the OpenFlow port numbers, because they become important later when we talk about user space OpenFlow ports versus datapath ports: those numbers are not necessarily equal. Here's an example of running ovs-ofctl dump-flows. By default, when you start OVS, there's just one flow entry, which says: for any packet that arrives on any interface, do NORMAL processing, where NORMAL processing means L2 learning. If you want to implement something more complicated, the first thing you do is delete that flow, and then push down the flow configuration you want; I'll show a slightly more complicated flow table later on. I'm not going to go into a lot of detail on hidden flows, but I wanted to mention them because they can trip you up during development. The one we typically run into issues with is in-band control. The idea with in-band control is that your switch needs to communicate with an OpenFlow controller, but you want to make sure the controller can't push down flows that would cut off its own communication with the switch. So, when in-band control is enabled, we have higher-priority rules that always trump whatever the controller pushes down, and we implement them at a higher priority than can be represented in OpenFlow.
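A sketch of how to look behind that (the disable-in-band key is documented in the ovs-vswitchd.conf.db man page; the bridge name is an assumption):

```shell
# Print every flow, including the hidden rules that in-band control
# installs at a priority higher than OpenFlow can express.
ovs-appctl bridge/dump-flows br0

# Disable in-band control on a single bridge.
ovs-vsctl set bridge br0 other_config:disable-in-band=true
```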
We've been burned by this a few times. I think OpenStack does something similar, where you have an integration bridge with all these things plugged into it and you program what happens: a packet can come in, get NORMAL processing, and you can end up introducing loops despite what you've configured. There are ways to disable in-band control; it's on by default, but there's a configuration option in the database to disable it on a per-bridge basis. If you're trying to debug that sort of issue, you can run ovs-appctl bridge/dump-flows, which prints all the flows in the flow table, even the ones at that high priority that can't be seen normally through OpenFlow. There's a file in the OVS distribution called INTERNALS that documents this in great detail; it's actually fairly subtle how we had to implement in-band control. For the kernel datapath, the ovs-dpctl utility is used to communicate with the datapath. You can see the configuration of the datapath with ovs-dpctl show, and you can look at the flow table that ovs-vswitchd has pushed down into the datapath with ovs-dpctl dump-flows; I'll show examples of both. Here's an example of ovs-dpctl show. The lookups line shows what happened to packets when we looked them up in the kernel flow table. The hit count shows how many times a match was found in the datapath. The missed count shows how many times we couldn't find an entry and sent the packet up to ovs-vswitchd. And the lost count shows packets that ovs-vswitchd never processed: they couldn't be placed on a queue, so they were dropped and lost forever.
Ideally you want that lost count to be zero, but it's a useful signal: if you're seeing flow setup performance issues, look at the lost count, and if that number keeps climbing, ovs-vswitchd isn't keeping up with the kernel module. In addition, there are the different ports, as I mentioned. This is the same switch I showed with the ovs-ofctl show command, and the port numbers are different: in ovs-ofctl, eth0 was port one and eth1 was port two. We can't map these one-to-one, because the datapath has just one flow table; there's just one datapath regardless of how many bridges you've configured. And in OpenFlow, certain port numbers map to particular things, so we can't keep a one-to-one relationship between ports in user space and in the kernel. This becomes an issue in the debugging slides later: you have to be cognizant of whether you're talking about an OpenFlow port or a datapath port. All right. Here's an example of ovs-dpctl dump-flows; this is the flow table in the kernel. As I mentioned, prior to 1.11 the flow entries were always exact match, and you can see the full definition of the flow here. Even though user space was configured just to do NORMAL processing, we're still matching, for this ICMP ping packet, on the ICMP type and ICMP code. In OVS 1.11 and later, when you run this command there's a slash and then a mask, and you can see here that the mask is zero for everything from the layer-three addresses up. ovs-appctl is another utility, used to make changes to or query the runtime state of the switch. By default, if you don't specify a target, it communicates with ovs-vswitchd, but you can change the target.
So you could say -t ovsdb-server and talk to ovsdb-server with ovs-appctl. Each OVS daemon has a set of commands unique to it; if you run ovs-appctl help, it will show you what's available, and each daemon's man page documents the appctl commands it supports. All of them support help; version, which shows the runtime version of the running daemon; and commands to configure the daemon's logging level. Now I'll show an example of using ovs-appctl to debug a flow table. As I mentioned, the flow table can become quite complex; we routinely see systems running tens of thousands of OpenFlow flows. When a packet can bounce between lots of different tables, or get resubmitted to the same table, understanding what's going to happen can be very difficult. So I'm going to show an example here of a, frankly, not very good implementation of a firewall. What I'm going to do is block all TCP traffic except what's going to port 80. The first flow says that if a packet comes in and it's TCP, resubmit it with port 4,000, which means you resubmit it back to the same OpenFlow table but change the ingress port metadata, so that it matches differently on the next lookup. The next rule makes this clearer: it's at the same priority, but this time it matches ingress port 4,000, the resubmitted flow, and if the destination port is 80, we do NORMAL processing, which just forwards the packet out.
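The flow table being described might be loaded roughly like this (a sketch in ovs-ofctl syntax; the bridge name and port numbers are assumptions):

```shell
# Start from an empty table rather than the default NORMAL flow.
ovs-ofctl del-flows br0

# TCP from port 1 is resubmitted with in_port rewritten to 4000.
ovs-ofctl add-flow br0 "priority=100,tcp,in_port=1,actions=resubmit:4000"
# On the second pass, only destination port 80 gets NORMAL processing.
ovs-ofctl add-flow br0 "priority=100,tcp,in_port=4000,tp_dst=80,actions=normal"

# Non-TCP traffic from port 1, and everything from port 2, is switched normally.
ovs-ofctl add-flow br0 "priority=90,in_port=1,actions=normal"
ovs-ofctl add-flow br0 "priority=90,in_port=2,actions=normal"
```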
But we don't have any other rule for ingress port 4,000. So if a TCP packet comes in, it gets resubmitted with ingress port 4,000, and this time no entry is found; and when there's no entry, we drop the packet. So we enforce the policy that only port 80 is allowed with one rule that permits port 80, while anything else that's TCP gets dropped. For any other traffic, we have a lower-priority rule that, since the higher priority already matched TCP, effectively says anything else gets NORMAL processing. And traffic coming back from the other interface also just gets NORMAL processing. This will be clearer in the example. There's a command supported in ovs-appctl called ofproto/trace, and you can hand it one of the kernel flows. When we ran ovs-dpctl dump-flows, we had all those exact-match entries, or, starting in 1.11, wildcarded entries; you can hand one to ofproto/trace and it will tell you what happened to the packet. In this example I ran ovs-appctl ofproto/trace with a description of the flow in quotes. There's a bunch of information here, but what we're looking at is the parts in red. It was an ICMP packet, a ping. The OpenFlow rule we matched was the lower-priority 90 rule, whose action was NORMAL: that was the user space processing. Below the blank line is the datapath side: a description of the flow we would push down into the kernel and what we did with the packet. So in this case, for a ping, you can see which rule we matched.
And then this was the set of actions: what would actually happen to the packet. Here's a TCP example, traffic going to port 80. In the first block you'll see we matched the rule that says: if it's priority 100, TCP, and the ingress port is one, the action is to resubmit with port 4,000. In the indented block we do another lookup and match the flow entry that says: if it's TCP port 80, do NORMAL processing. After the blank line, you can see what would happen in the datapath, which is that we send it out port three. And this is where the OpenFlow and datapath ports get confusing: in this example the ingress port is one, which is the OpenFlow port, but the datapath action sends the packet out port three, which, if you remember, was eth1, OpenFlow port two. So once again, you have to be aware of whether you're looking at OpenFlow or datapath port numbers. And here's an example where we send TCP traffic that's not destined for port 80; I sent a packet going to port 100. Once again we match that it's TCP and it gets resubmitted with ingress port 4,000, but this time no flow entry is found, so we automatically drop the packet, and you can see that the datapath action was drop. This is one of the utilities our controller team really likes, because they may have 20 different resubmits and tens of thousands of entries, and figuring out what will happen to a packet, why it got dropped or why it got forwarded out an interface, can be very complicated.
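The traces just walked through can be reproduced roughly like this (a sketch; the addresses and port numbers are assumptions, and the output format varies by OVS version):

```shell
# A ping: expect the priority-90 rule to match and a NORMAL-derived
# datapath action.
ovs-appctl ofproto/trace br0 "in_port=1,icmp,nw_src=10.0.0.1,nw_dst=10.0.0.2"

# TCP to port 80: resubmitted with in_port 4000, then forwarded.
ovs-appctl ofproto/trace br0 \
    "in_port=1,tcp,nw_src=10.0.0.1,nw_dst=10.0.0.2,tp_dst=80"

# TCP to port 100: resubmitted, no match on the second pass, so dropped.
ovs-appctl ofproto/trace br0 \
    "in_port=1,tcp,nw_src=10.0.0.1,nw_dst=10.0.0.2,tp_dst=100"
```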
So ofproto/trace is very useful for that. All right, then logging: there's an ovs-appctl command for configuring logging, as I mentioned. By default it changes the logging of ovs-vswitchd, but you can change the target and do it for ovsdb-server or any other daemon that's running. The available log modules differ between components, obviously, between ovs-vswitchd and ovsdb-server, so the output will vary depending on where you point ovs-appctl vlog/list or vlog/set. Log files by default are stored in /var/log/openvswitch; the usual ones to look at are ovs-vswitchd.log and ovsdb-server.log. We also put some information in the system log, typically /var/log/messages. And the configuration database, as I showed, is essentially a log as well: you can run the show-log command and see how the configuration got into its current state. The last thing: if there are any questions, I'll be around later, or we can open up the floor in a minute. The documentation in OVS tends to be very good; we try to keep it up to date. We also have a FAQ, so please check the FAQ before you send a question to the mailing list, and if all of that fails, send a message to the discuss@openvswitch.org mailing list and ask your question; we try to be pretty responsive. All right, that was all I had. I don't know if there are any questions. All right, thanks. I'll be around for a minute.