Nice to see you again. Hi, everybody. So we've got a lot of SDN fans out there, right? Networking folks, DevOps folks. We're all going to have some fun. My name is Tim. I'm from Midokura. We're the makers of MidoNet, one of the SDN plug-in providers for OpenStack. Basically, we fix your Neutron. But that's not what we're going to talk about today. We're going to talk about cool, shiny buttons and how to do some troubleshooting on your overlay.

So one of the problems with having an overlay or SDN solution is that you're virtualizing everything. You can't necessarily use the same tools you used in the good old days to packet capture and traceroute and figure out what's going on, where you're dropping packets, and who mangled a switch somewhere. That's kind of a problem, and up until now it's really been an unsolved problem. You can get various degrees of visibility into your network using the old-fashioned tools, but if you're using one of the other vendors that locks everything down, you may not even be able to get a basic packet capture from your overlay network. On top of that, there's been a lot of call for more visibility into the network for other reasons: billing, metrics, maybe you want to charge based on bandwidth usage, or you just want to make sure no one's spiking your network.

Well, OK. So we here at Midokura focused our last round of updates on some cool new developments. We basically developed the tools to give you insight into your virtual topology, and also to make it easier for you to write your own tools to do that. So let me introduce you to MidoNet Manager. This is our dashboard with all of our fancy business in it. It's part of our enterprise product, so when you get MidoNet from us, you get this as well. It's been around for a little while, but you'll see we added some new features to it and prettied everything up.
But just by way of an FYI, you can also get all of these tools from the command line, because I know a lot of you don't really care about shiny stuff.

OK. So one basic problem is that we really don't have any insight into what our virtual topology is, how packets are moving over it, or the ability to diagnose what's going on at each hop of the virtual topology. That might be because those hops don't actually exist, but it might also just be because we're encapsulating everything and it's not easy to see. So the first basic thing we did is give you real-time metrics on all of the virtual objects, since we have an object-based system. Essentially, you can sit on top of any object and get all of the information you want out of it. In this case, what we're showing here is our dashboard for our provider router object. This is the aggregation unit for everything external to the cloud. And you can see that we have a real-time histogram (if this weren't a screen capture) of the traffic coming in and out. We can see transmitted packets, all that fun stuff. You can also see down at the bottom that we split it out by port, both logical and physical. This particular box has two physical ports; it's actually a VM, but it would have two physical ports if it were a real box. And you can see we split out the traffic that way as well, so you get a little more visibility there. You can also click on all of these objects and get more detailed information, but it's just nice to have that kind of thing. It's all real time. Basically, we're pulling our counters and metrics from our database system and displaying them as a real-time histogram. Essentially, you're manipulating data that we already stored to get interesting insights into what's going on. But that's a basic tool. What else do we have?
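To make that last idea concrete, here's a minimal sketch of how stored cumulative counters could be turned into per-interval rates for a histogram like the one described. The sample format and function name are illustrative assumptions, not MidoNet's actual API.

```python
# Sketch: turn cumulative counter samples into per-interval byte rates,
# the way a dashboard histogram would display them. Hypothetical format,
# not MidoNet's real schema.

def counter_rates(samples):
    """samples: list of (timestamp_sec, cumulative_bytes) tuples.
    Returns bytes-per-second for each pair of adjacent samples."""
    rates = []
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip out-of-order or duplicate samples
        rates.append((b1 - b0) / dt)
    return rates

# Counters sampled every 5 seconds from, say, a provider-router port.
samples = [(0, 0), (5, 5_000), (10, 20_000), (15, 20_000)]
print(counter_rates(samples))  # [1000.0, 3000.0, 0.0]
```

Since only deltas between adjacent samples matter, the poll interval sets how fine-grained the histogram is, which is why this works against data that's already in the store.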
OK, so one of the really annoying things to debug is what's outside your network. We try to keep everything as simple as possible, but we use a BGP peering mechanism for upstream connectivity. It's very simple to set up, but it's sort of hard to diagnose. And we found that one of the biggest issues during installation was that we'd get everything set up, but then the BGP peer on the upstream router would suddenly stop advertising, something like that. So what we added was the ability to look at the BGP session objects. You can see things like which routes you're advertising upstream, what you're getting from your peer, who's assigned where, what the prefixes are, all that fun stuff. You can also edit and interact with these objects. So you can inject more routes manually if you want to, see who's up, who's down, what the failover time is, all that fun stuff. You can see it directly from the CLI, but you can also click on shiny buttons and see it here.

OK, so what does this look like? Like I said, we have an object model. That applies to all the ports, and it applies to all the virtual objects, like tenant networks and VMs. Basically, you can interact directly with the data that we store to do your calculations. So here's an example of what it looks like to edit some of the routes that are getting injected into our VMs from our upstream BGP peer. You can see that some of them are learned and some of them are coming from us; those would be the default routes. But we can also go in here and edit these objects to inject new routes, give them extra weight, or remove routes that we think are rogue or suspicious. And this is all updated in real time across all of our active gateways. So that's BGP. But what about diagnosing the actual virtual network? Well, that's where our new trace-route feature comes in.
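The route-editing workflow described above can be sketched as follows. This is a toy model of a route table where learned routes sit next to locally injected ones; the class and method names are invented for illustration and are not MidoNet's API.

```python
# Hypothetical sketch of the route edits described: inject a route with
# extra weight, withdraw a suspicious one. Not MidoNet's real object model.

class RouteTable:
    def __init__(self):
        self.routes = {}  # prefix -> {"source": ..., "weight": ...}

    def learn(self, prefix, weight=100):
        """A route received from the upstream BGP peer."""
        self.routes[prefix] = {"source": "learned", "weight": weight}

    def inject(self, prefix, weight=100):
        """A route an operator injects manually from the dashboard or CLI."""
        self.routes[prefix] = {"source": "local", "weight": weight}

    def withdraw(self, prefix):
        """Remove a rogue or suspicious route."""
        self.routes.pop(prefix, None)

table = RouteTable()
table.learn("10.1.0.0/16")
table.inject("0.0.0.0/0", weight=200)  # default route, extra weight
table.learn("192.0.2.0/24")
table.withdraw("192.0.2.0/24")         # looks rogue; drop it
print(sorted(table.routes))            # ['0.0.0.0/0', '10.1.0.0/16']
```

In the real system the talk says such an edit propagates in real time to every active gateway; here a single dictionary stands in for that shared state.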
We have a new virtual traceroute that lets you walk the packet processing over the top of the overlay, essentially as the overlay sees it. You can diagnose what's going on, what each hop of the packet's path was, what the transformation at that hop was, and see where you're losing traffic or where something went wrong. So essentially what you can do is create trace conditions through any one of our endpoints, the CLI or, in this case, the dashboard. Here is just a simple condition: any source IP address whose traffic is going to the Google DNS servers. So hopefully we can get something out of that, but it gives you an idea of the trace triggers you can set. You can set essentially arbitrary triggers and run them when you want, and then we will match those conditions against the existing traffic in your network and give you a data dump of that flow.

So here's what it looks like. This is a particular trace request, the one we just saw earlier. You can see that it triggered on a couple of occasions, and we have some data entries there we can look at. So you don't have to run them manually; you can just set them to run, and they will grab matching data as they go. And this is what the data looks like. It's each hop on the network, all the logical objects. So we have things like starting from the compute node, how the packet walks through our flow-creation process, what happens when it comes off of flow creation and goes to the next hop, which in this case is the gateway, because it's outbound traffic. You can see that we get some insight into the tunnel, what the tunnel key is and where it is, and you can go look it up. These traces tend to get quite long, but we wanted to give you detailed visibility into all the network objects.
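The condition-based triggering described above can be sketched like this: a condition is a predicate over packet metadata, and matching flows are collected for later inspection. The field names and the Google DNS example follow the talk, but the schema itself is an assumption, not MidoNet's trace-request format.

```python
# Sketch of condition-based trace triggering. The packet-metadata fields
# are illustrative, not MidoNet's real trace-request schema.

GOOGLE_DNS = {"8.8.8.8", "8.8.4.4"}

def make_condition(dst_ips):
    """Trigger on any source IP whose traffic heads to one of dst_ips."""
    return lambda pkt: pkt.get("dst_ip") in dst_ips

def run_trace(condition, packets):
    """Return the packets (flows) that match the trace condition."""
    return [pkt for pkt in packets if condition(pkt)]

packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "8.8.8.8", "proto": "udp"},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "proto": "tcp"},
    {"src_ip": "10.0.1.7", "dst_ip": "8.8.4.4", "proto": "udp"},
]
matches = run_trace(make_condition(GOOGLE_DNS), packets)
print(len(matches))  # 2
```

In the real system the condition stays armed and collects matching flows as they appear; here a single pass over a packet list stands in for that.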
So you can essentially think of it as a traceroute on steroids, but for the overlay network. And I just wanted to show you, if this will loop for a second, that this is all live. We have our dashboard running at our booth as a demo, up and running for anybody who wants to use it. We have it containerized, so essentially you just point it at our API and you're good to go. It's all stateless, so it doesn't add any overhead, and we're doing all of this in real time. You can also extend all of this and get time-series graphs or historical data if you combine it with something like ELK: we can feed that data from our API directly into Logstash, or whatever log keeper you want, and you can analyze the historical data to see what's going on. So it isn't very exciting, but you'll see that on each update cycle, we get a refresh. You can set the cycle time and how fine-grained you want your visibility to be, but for every logical object, there is this display. That can be a VM, but it can also be a port, or a step in a bridge. It doesn't have to be a fixed thing on the network; it can be any step in the process.

So I figured there would be some questions about what we're going to be doing in the future, or maybe how this is useful, so I'll open it up for questions. Any questions? That's exciting. No, so this is reading the data directly. Sorry, the question was: do we have to install an agent for this? And no, this is reading the data directly out of our cold data store. We have a data cluster made up of ZooKeeper and Cassandra, so we're essentially reading out data that we already had available. It's all stateless. In fact, for ease of deployment, we stuck all of this into a container that you can just download. Any other questions? Yeah, we actually have a customer. I'm not sure I can do names.
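The ELK-feeding idea mentioned above amounts to flattening each per-object metrics sample into a JSON event that Logstash or Elasticsearch can index. The field layout below is an assumption for illustration, not a documented MidoNet export format.

```python
# Sketch: flatten one per-object metrics sample into a JSON log event
# suitable for a Logstash-style pipeline. Field names are hypothetical.
import json

def to_log_event(obj_id, obj_type, timestamp, counters):
    """Build a single JSON log line for one metrics sample."""
    event = {
        "@timestamp": timestamp,   # ISO 8601, the usual Logstash convention
        "object_id": obj_id,
        "object_type": obj_type,   # e.g. "port", "bridge", "router"
    }
    event.update(counters)
    return json.dumps(event, sort_keys=True)

line = to_log_event(
    "port-42", "port", "2015-05-20T10:00:00Z",
    {"rx_bytes": 20_000, "tx_bytes": 5_000},
)
print(line)
```

Because every sample is tagged with its object ID and type, downstream queries can slice the history by any logical object, which matches the "any step in the process" framing above.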
But yeah, we have a customer who's using that right now. We use a scraper system to do the injection in real time, but mostly the use case is historical security audits. In their case, they also wanted some bandwidth analysis for billing. Because you can sit on each object, you don't have to say something like, "I'm going to charge for all the traffic coming out of this VM." You can say, "I'm going to charge for all the traffic that's outbound from this port, that crosses this bridge, or that's bound for another part of this network." So a good use case is simple billing. For our own cloud, it's also fed into a monitoring system for spike alerts and things like that. Anybody else? Cool. So come stop by and talk to us. You can see this stuff and actually click it. These are only a few features, as you may have noticed; there's a lot more in the dashboard, including insight into all of the objects. We also have VTEP support, health monitors, load balancing, all that fun stuff. So stop by the booth, we're right in the back over there, and you can take a look and click around. Thanks, guys.
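As a closing aside, the per-object billing idea mentioned in the Q&A can be sketched very simply: sum outbound bytes per logical object (port, bridge, and so on) and apply a price per gigabyte. The record format and the rate here are made up for illustration.

```python
# Sketch: per-object bandwidth billing on top of stored counters.
# Record format and pricing are hypothetical.

def bill(records, price_per_gb):
    """records: iterable of (object_id, tx_bytes). Returns object_id -> cost."""
    totals = {}
    for obj_id, tx_bytes in records:
        totals[obj_id] = totals.get(obj_id, 0) + tx_bytes
    gb = 1024 ** 3
    return {obj: round(total / gb * price_per_gb, 2)
            for obj, total in totals.items()}

records = [
    ("port-1", 3 * 1024 ** 3),   # 3 GB outbound on port-1
    ("port-1", 1 * 1024 ** 3),
    ("bridge-7", 2 * 1024 ** 3),
]
print(bill(records, price_per_gb=0.05))  # {'port-1': 0.2, 'bridge-7': 0.1}
```

The point from the talk is that the object ID can be any logical object, so the same aggregation charges per port, per bridge, or per network segment without changing the pipeline.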