I'll try my best. So my name is Mark McClain; I'm the former PTL of Neutron, and I want to talk a little bit about deploying Neutron and where we're going today. For some, Neutron can be a long, winding road where you don't know what's behind the next hill, so let's really take a look.

First of all, you want to ask a few basic questions. Multi-tenancy sounds like an odd question to ask in a cloud deployment, but whether you're big or small, you sometimes need to really answer it: do you have high multi-tenancy, with lots of individual users or applications you want to isolate from each other, or is it low? Those decisions play into a couple of points down the road.

Also, do you need to isolate tenants? In some deployments you don't need isolation, and you can make your deployment simpler and generally just skip it. But even if you do trust your tenants, sometimes you still want to isolate them. One of the biggest benefits is that you can limit the amount of L2 traffic that different applications see. In some of the places I've worked, we had very low multi-tenancy and trusted everybody, but one of the biggest complaints we got from application developers was that they were seeing too much broadcast traffic from other applications in the data center, and they just found it annoying when they were troubleshooting.

Another question you often deal with when deploying is what kind of isolation you want. The lowest-hanging fruit is VLANs, meaning 802.1Q VLAN tagging. It's supported in about every cheap NIC you can find; it's widely supported. But there is a limited number of VLANs: you get about 4,000-ish, a bit more technically, but most switches start having problems when you get to even 3,000, because the default configs limit the range of VLANs available within the switch. If you have over 4,000 tenants, this is obviously a problem, and then you have to start doing tricks with trunking and programming the switches. The underlay also has to support the topology. Typically when you're running VLANs, you end up with a very large L2 domain, and one of the problems that creates is a limit on the number of instances you can boot: the MAC tables of the switches can start overflowing. Depending on your vendor, some hardware may only allow 50,000 MAC table entries in the switch, some may allow 100,000, and some may tell you they allow 100,000 until you enable some really awesome management feature that all of a sudden takes up all the available space.

Alternatively, we can do GRE or VXLAN tunnels. Basically it's L2 encapsulated in Layer 3/Layer 4, so it's routable. You can go with what you typically find in current data center design, with Clos fabrics. It's easy to expand because you can add racks one at a time, and it's immediately routable. You can move instances around because the tunnels make the instance's location somewhat transparent. There's a little bit of increased packet overhead: a VLAN tag is relatively small, while GRE and VXLAN both add extra header information (the quick arithmetic below shows the difference). But it's easier to grow the deployment because you're not building one very large L2 domain.
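To put numbers on that packet-overhead point, here is a back-of-the-envelope sketch. The figures are standard header sizes, assuming the tunnels carry full Ethernet frames over IPv4; this is just for MTU planning, not an official tool.

```python
# Rough per-packet encapsulation overhead, in bytes, for the isolation options
# above. Exact GRE size varies with options (+4 bytes with a key, for example).
OVERHEAD = {
    "vlan  (802.1Q tag)": 4,
    "gre   (inner eth 14 + outer IPv4 20 + GRE 4)": 38,
    "vxlan (inner eth 14 + outer IPv4 20 + UDP 8 + VXLAN 8)": 50,
}

TENANT_MTU = 1500  # what instances expect to be able to send

for tech, extra in OVERHEAD.items():
    print(f"{tech}: underlay needs MTU >= {TENANT_MTU + extra}")
```

The practical takeaway is that a tunnel-based underlay either needs slightly jumbo frames (MTU 1550 or more) or the tenant MTU has to be lowered, which is a common day-one deployment gotcha.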
Basically, you can scale up each one of your routing domains. With people deploying this style, what you'll typically see is top-of-rack switching with a full mesh at the top.

Also, if you have very large deployments, one of the things to consider is L2 population. As you start spinning up tunnels, you still have the same problem we all have with traditional networking anyway, which is: how do you find instances? If you have hypervisors A, B, C, and D, and you spin up two instances on hypervisors A and D, then when the instance on A needs to talk to the one on D, it has to flood and discover it across all the hypervisors. Some of the options and controllers will do what's called L2 population: by knowing the entire logical state of the network, they can pre-populate the forwarding tables and say exactly where to send any broadcast or unknown traffic. What that does is reduce the amount of traffic within your data center; there's a sketch of the mechanism below.

Another consideration is Layer 3. Generally, Layer 3 in Neutron really depends on your provider, because they're all over the place, but I'll talk a little bit about what the reference implementation looks like. In the reference, you typically have your hypervisors and a network node attached to the core, and typically you'll have more than one network node; in the latest release, you can actually run HA pairs. Within that network node you have a network namespace, which basically acts as a little virtual router: it does simple forwarding, and it also does NAT for any floating IPs (a sketch of that NAT plumbing also follows below). One of the problems with that network node is the noisy neighbor. Say we start up a VM and it's getting lots of traffic, or even worse, you're running a public cloud and somebody is actively DDoSing it from the outside: you can end up at the point where that VM generates so much traffic it saturates your network node, basically blocking traffic for everybody else.

So one of the things that's coming, and it's available in Juno, though as a deployer I wouldn't recommend running the Juno version, mainly because it needs more testing, is what's known as the Distributed Virtual Router (DVR). In the open source version, we actually create little routing instances on each of the hypervisors and route directly to the core of your network. The floating IP is kind of the key point: if a floating IP is associated with the instance, DVR handles it on that hypervisor. So when people talk about DVR support or not, basically it means doing the routing directly from the hypervisor.

When we actually talk about the deployment options, we have several drivers and plugins, 19 of them in all. They vary in terms of features and support; some are open source, some are not. There's lots of choice when you talk to everybody, but basically here is what you want to start considering. Some of the deployments have a central network controller: it's basically a clustered system, a lot of times based on Cassandra, Hadoop, or ZooKeeper. You run that cluster, and what Neutron is doing is acting as an API fronting it. There are a few of the solutions which do not have a central cluster and rely on Neutron's database.
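Here is the L2 population mechanism mentioned above in miniature: a minimal sketch of the kind of forwarding-database entries an l2pop-style agent writes on each hypervisor, using the standard iproute2 `bridge fdb` commands. The inventory dict, device name, and addresses are all illustrative.

```python
# Pre-populate a VXLAN device's forwarding database so traffic to known MACs is
# unicast straight to the right VTEP instead of flood-and-learn.
import subprocess

# Hypothetical view of the logical network: instance MAC -> hosting VTEP IP.
known_ports = {
    "fa:16:3e:aa:bb:cc": "192.0.2.11",  # instance on hypervisor A
    "fa:16:3e:dd:ee:ff": "192.0.2.14",  # instance on hypervisor D
}

def populate_fdb(vxlan_dev: str) -> None:
    for mac, vtep in known_ports.items():
        # Static unicast entry: the kernel now skips flooding for this MAC.
        subprocess.run(
            ["bridge", "fdb", "add", mac, "dev", vxlan_dev, "dst", vtep],
            check=True,
        )
        # Remaining broadcast/unknown traffic can be limited to known VTEPs
        # with all-zeros flood entries, e.g.:
        #   bridge fdb append 00:00:00:00:00:00 dev vxlan-101 dst 192.0.2.14

populate_fdb("vxlan-101")
```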
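And for the reference Layer 3 implementation, here's a minimal sketch of what that little virtual router in the network namespace boils down to for a floating IP: plain 1:1 NAT with iptables. The namespace name and addresses are illustrative, not what the agent actually generates.

```python
# One floating IP inside a router namespace: DNAT inbound, SNAT outbound.
import subprocess

NS = "qrouter-demo"          # hypothetical router namespace
FLOATING_IP = "203.0.113.7"  # public address
FIXED_IP = "10.0.0.5"        # instance's private address

def in_ns(*cmd: str) -> None:
    """Run a command inside the router's network namespace."""
    subprocess.run(["ip", "netns", "exec", NS, *cmd], check=True)

# Inbound: traffic to the floating IP is rewritten to the instance.
in_ns("iptables", "-t", "nat", "-A", "PREROUTING",
      "-d", FLOATING_IP, "-j", "DNAT", "--to-destination", FIXED_IP)

# Outbound: the instance's traffic leaves stamped with the floating IP.
in_ns("iptables", "-t", "nat", "-A", "POSTROUTING",
      "-s", FIXED_IP, "-j", "SNAT", "--to-source", FLOATING_IP)
```

Every packet through this namespace shares one network node's capacity, which is exactly the noisy-neighbor bottleneck DVR addresses by moving these rules onto each hypervisor.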
So in terms of operational complexity, with those backends you have to decide whether or not you want multiple systems that proxy for each other, or whether you want to keep everything internal to Neutron.

There are also scaling limits. The number of hypervisors the different solutions support, and the number of ports they support, is wildly different. Some of that comes down to the calculations necessary: if you're doing, say, full OpenFlow with central control, the more ports you add, the more calculation it takes to write all the appropriate flow rules everywhere. Also, as you get super large, what can happen is you end up with more flows than will fit in the switch tables. As packets traverse the topology, they get bumped up into user space, and even worse, sometimes the information isn't local in the user-space version of the switch, and that packet gets kicked all the way up to the central controller, which means very high latency for the first packet. So when you're evaluating these, one of the things to check is the latency on the first packet; there's a quick probe for it sketched at the end of this section. In some cases that overhead is acceptable, and in other cases you may want to consider a different solution.

IPv6 has been around for 20 years. If you're making a new deployment without it, you're messing up. Go ahead and build IPv6 in.

Also, levels of testing: as Michael hit on, while all the in-tree Neutron drivers do have third-party CI, there are varying levels. Some do very basic checks; some do very deep checks. Sadly, some of the drivers will only support IPv4 and not IPv6. So check into the level of testing, and also into the level of support, whether that's vendor support for a proprietary solution or community support for the open source bits. Some have better documentation than others, and some have bigger communities than others.
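Here is the first-packet check mentioned above; a quick-and-dirty probe rather than a benchmark. It assumes a Linux `ping` and uses a placeholder address for a peer instance whose flows haven't been programmed yet.

```python
# Compare the first ping RTT against steady state: a large gap suggests flow
# setup is being punted to user space or a central controller.
import re
import subprocess

def ping_rtts(host: str, count: int = 5) -> list[float]:
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True,
    ).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

rtts = ping_rtts("10.0.0.42")  # placeholder: a freshly booted peer instance
print(f"first packet: {rtts[0]:.1f} ms, steady state: {min(rtts[1:]):.1f} ms")
```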
So if we dive specifically into the open source options: with Neutron, we have what's called Modular Layer 2 (ML2). If you think about a Neutron plugin as the engine of a car, you used to get exactly one plugin; ML2 took that plugin interface and allows you to have multiple drivers for that engine. OVS is Open vSwitch, a software switch that's currently available in just about every distro. The alternative is Linux bridge, which is basically a switch built from traditional kernel constructs. OVS supports a management protocol known as OVSDB, and it also supports OpenFlow, which some people like and some people don't. The appeal of running Linux bridge is its simplicity: you can use all the standard iproute2 utilities that everybody is used to and loves. Then there are others, such as OpenContrail, which is a Layer 3 based system; it was originally created by a division at Juniper, it's been spun out, and multiple companies support it. OpenDaylight has a foundation with multiple members behind it and is Java based; the interesting thing about OpenDaylight is that it's pluggable as well, so you end up with Neutron itself being pluggable into a very pluggable backend. It's really hard to say OpenDaylight is one specific implementation because of the number of choices. And then lastly you have Ryu, which is an OpenFlow based controller framework; in their most recent work, they support a very distributed model, basically using little mini controllers that they're able to distribute onto each of the hypervisors.

So I know I went through a bunch of different choices. The architecture guide goes through a lot of this in even more detail than I could cover in 20 minutes, from MAC table entries to the different options for deployers and how you want to choose, and the admin guide covers deploying these things. It's really difficult with Neutron, just because there are so many options and Neutron is really a thin wrapper in terms of API. So with that, questions? None? Oh, yes.

Talking about IPv6 there: I'm new to OpenStack, and when I first started looking at the networking side of things, I was a little taken aback, and the network engineer in me was a bit horrified at all the NAT and stuff. How does it change when you're using IPv6, which doesn't map very well onto the NAT model? Do floating IPs have different semantics, that sort of thing?

Yeah, so floating IPs in OpenStack right now are only supported for IPv4. Many of us are very strongly of the opinion that with IPv6 you should have direct routing to the host; we don't want to add NAT into the data path. Our IPv6 support actually comes in several flavors. You can delegate a prefix to the network; you can use SLAAC, which is stateless address autoconfiguration; and we also support DHCPv6 in both forms, so you get stateful, which is very close to what you'd find in traditional v4 DHCP service, and DHCPv6 stateless, where a router advertisement auto-generates your IP address but the client can still make requests to the DHCP server for additional optional fields. Most of the support for the DHCPv6 services was primarily driven by Comcast, which, at least in the US, has been a very big proponent of making v6 as fast as possible; it's as close to the spec, and to what you'd actually want to deploy, as possible for their internal use cases. So that's why we didn't want to support NAT, and even for the foreseeable future I don't see the Neutron team adding v6 NAT, despite the requests we get for it. Floating IPs will exist for v4, but for v6 there's work to add multiple prefixes so that you can renumber and have transitional periods; and because so much v6 address space is available, most deployers should be able to get a large enough block to have fully publicly routable IPs.

I just want to pick on that a bit further. The most common use case I see for floating IPs is that I want to have something in DNS, but I might want to pull the instance out from underneath it and use it for a different instance, right? So how does that work in a v6 world without floating IPs?

One of the things we're also working on in Kilo is extracting out the IPAM system in Neutron so that you can preserve a static allocation: you keep the IP address itself as public in your network, and then you can assign it to ports and physically move that address between ports without it returning to the pool. Additionally, with the rewrite of load balancing we're adding support so you can have your VIP there, because typically people want a well-known IP associated with their load balancer in a lot of cases, although we do see it with single instances as well.
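To make those v6 flavors and the "keep the address, move the port" idea concrete, here's a minimal sketch using the openstacksdk client. The cloud name, network, CIDRs, floating IP, and port names are all placeholders; the ipv6_ra_mode/ipv6_address_mode values are the Neutron API's standard knobs for choosing between SLAAC and the two DHCPv6 modes.

```python
# Three IPv6 subnet flavors plus a v4 floating-IP move, via openstacksdk.
import openstack

conn = openstack.connect(cloud="example")      # assumes a clouds.yaml entry
net = conn.network.find_network("tenant-net")  # hypothetical network

# SLAAC: instances auto-configure purely from router advertisements.
conn.network.create_subnet(
    network_id=net.id, ip_version=6, cidr="2001:db8:a::/64",
    ipv6_ra_mode="slaac", ipv6_address_mode="slaac",
)

# DHCPv6 stateful: closest to traditional v4 DHCP service.
conn.network.create_subnet(
    network_id=net.id, ip_version=6, cidr="2001:db8:b::/64",
    ipv6_ra_mode="dhcpv6-stateful", ipv6_address_mode="dhcpv6-stateful",
)

# DHCPv6 stateless: address from the RA, optional fields from the DHCP server.
conn.network.create_subnet(
    network_id=net.id, ip_version=6, cidr="2001:db8:c::/64",
    ipv6_ra_mode="dhcpv6-stateless", ipv6_address_mode="dhcpv6-stateless",
)

# The v4 floating-IP pattern from the question: re-point an existing floating
# IP at a different instance's port without releasing it back to the pool.
fip = conn.network.find_ip("203.0.113.7")         # placeholder floating IP
new_port = conn.network.find_port("web-02-port")  # hypothetical port name
conn.network.update_ip(fip, port_id=new_port.id)
```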
I'd just like to hear your opinion on Open vSwitch in terms of scalability, and also how stable it is, particularly recent versions of Open vSwitch running with recent versions of the kernel.

So there are several very large deployments right now that are running Open vSwitch, and it's relatively stable. In terms of scalability, it really depends on what you have driving Open vSwitch. We have the reference implementation, which doesn't scale to a super large number of nodes. OpenDaylight, via OVSDB, can drive Open vSwitch, and there are a few proprietary solutions which drive Open vSwitch in addition to the vendor's proprietary hardware. In general it's been very stable, though in some cases folks will see high CPU utilization.

One current caveat: if you're not using a central OpenFlow controller, Open vSwitch doesn't have connection tracking. So to support security groups in OpenStack, you end up having a separate, basically logical hop on the machine, where you have to route the packet through a Linux bridge so that the packet is visible to netfilter (the sketch after this answer shows the kind of stateful rules that detour exists for). Probably sometime later this year, Open vSwitch will integrate better with netfilter, so you can write the appropriate rules with connection tracking and express the flows directly without having to have a central controller. And what many folks have found, in enterprises and in some of the smaller installations where they don't want a central controller, is that the functionality provided by the Linux bridge utilities is more than adequate for many deployments.

Also, one of the early benefits of Open vSwitch was tunneling, for the folks using proprietary STT or VXLAN. Now that VXLAN is native in the Linux kernel, you get a lot of the same benefits already, and as a matter of fact Open vSwitch can now take advantage of those. Additionally, some people like running DPDK with Open vSwitch to get pipeline processing and reduce CPU utilization. Some of those features are a little more advanced, a little more bleeding edge, so it really depends on the size and scale of your deployment and what kinds of features you want to push. Do you have a large ACL set? I'll stop there. Anything else? All right, thank you.
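As a footnote to that connection-tracking point, here is a minimal sketch of the kind of stateful iptables rules the Linux bridge detour exists to apply. The chain and device names are hypothetical, not what the agent actually generates, and in a real deployment these chains are wired into FORWARD with per-port physdev matches.

```python
# Stateful per-port ingress filtering: the part OVS alone can't do today
# without a central controller, because it lacks connection tracking.
import subprocess

def stateful_ingress_rules(tap_dev: str, allowed_tcp_port: int) -> list[list[str]]:
    chain = f"sg-demo-{tap_dev}"  # hypothetical chain name
    return [
        ["iptables", "-N", chain],
        # Replies to established flows pass without re-matching every rule.
        ["iptables", "-A", chain, "-m", "state",
         "--state", "ESTABLISHED,RELATED", "-j", "RETURN"],
        # The actual security-group rule, e.g. allow inbound SSH.
        ["iptables", "-A", chain, "-p", "tcp",
         "--dport", str(allowed_tcp_port), "-j", "RETURN"],
        # Everything else is dropped: default-deny ingress.
        ["iptables", "-A", chain, "-j", "DROP"],
    ]

for cmd in stateful_ingress_rules("tapdemo0", 22):
    subprocess.run(cmd, check=True)
```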