All right, thank you all very much for coming to our talk today on OpenStack and OVN, where we'll cover what's the latest in OVS and OVN as of 2.7 and beyond. My name is Russell Bryant. I'm Ben Pfaff. And I'm Justin Pettit.

First, I wanted to make sure everyone is familiar with what virtual networking is. On the left here we've got a picture of what a physical network, or a physical configuration, might look like: two hypervisors, and VMs from two different customers, one in green and one in purple. That's the physical topology. But you don't necessarily want to constrain the configuration and the connectivity based on the physical topology. What virtual networking does is create an abstraction that lets you build logical networks that are separate from the physical ones. On the right we've got the logical networks we've created, and regardless of the configuration on the left (the VMs can move around between hypervisors), you get the same connectivity. One customer has a relatively simple configuration, with three VMs connected by a single logical switch. The other has a more complicated topology, with a few logical switches and a logical router connecting them. This is fairly commonly used in Neutron, and I assume most people are familiar with it, but we want to make sure we're all on the same page.

So what OVN is looking to do is provide virtual networking for Open vSwitch. It's a project we started a couple of years ago, and it's currently developed within the OVS project. We'll talk a little later about possibly breaking that out, but for now it's in the same GitHub repo as the rest of the Open vSwitch code, and it's developed under the same process as Open vSwitch: all of the work happens on a mailing list, everything is reviewed out in the open, and there's a public git repo. Last year we moved the OVS project to the Linux Foundation, so OVN falls under the umbrella of the Linux Foundation along with Open vSwitch. All of the code is developed under an Apache license, so people are able to make changes without having to share them. We always like it if you do, and we appreciate any contributions, but it's not required by the license. One of the criticisms we'd had of Open vSwitch was a fairly irregular release policy, so we've moved to a regular six-month interval for OVS and OVN: the last release was in February, and the next one will be in August.

OVN provides the features you'd expect. We have to recreate, in software, all of the networking services that would typically be available in a physical network. So we have a logical firewall, logical routers, and logical switches; they do IPv4, IPv6, and all the things you'd expect. We have support for NAT, load balancing, and DHCP. And OVN works on all the same platforms that OVS does: it works on Linux, which is the main platform people deploy and use OVS on, but there's also support for DPDK, which is an accelerated data path, and for Hyper-V as well.
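As a rough illustration of the kind of logical topology described above, here is a minimal sketch using the ovn-nbctl command-line tool to build two logical switches connected by a logical router. All of the names, subnets, and MAC addresses are made up for the example; in an OpenStack deployment you would never type these yourself, since networking-ovn translates Neutron API calls into equivalent northbound database entries, but it's a handy way to see what the logical model looks like.

    # Two logical switches
    ovn-nbctl ls-add sw0
    ovn-nbctl ls-add sw1

    # A logical router connecting them
    ovn-nbctl lr-add lr0
    ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 192.168.1.1/24
    ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 192.168.2.1/24

    # Patch each switch to its router port
    ovn-nbctl lsp-add sw0 sw0-lr0
    ovn-nbctl lsp-set-type sw0-lr0 router
    ovn-nbctl lsp-set-addresses sw0-lr0 router
    ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
    ovn-nbctl lsp-add sw1 sw1-lr0
    ovn-nbctl lsp-set-type sw1-lr0 router
    ovn-nbctl lsp-set-addresses sw1-lr0 router
    ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1

    # A VM port on each switch
    ovn-nbctl lsp-add sw0 vm1
    ovn-nbctl lsp-set-addresses vm1 "00:00:00:00:00:01 192.168.1.5"
    ovn-nbctl lsp-add sw1 vm2
    ovn-nbctl lsp-set-addresses vm2 "00:00:00:00:00:02 192.168.2.5"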
Some of the features vary a little bit depending on the platform's capabilities. For example, we leverage the Linux connection tracker for NAT and firewalling; those aren't available on DPDK and Hyper-V, so we have to recreate those services. They're all being added, but sometimes there's a bit of a gap: Linux always supports all the features of OVN, and the other platforms are catching up, but they should reach feature parity. There is support for L2 and L3 gateways, and I think that's where the DPDK use case will be particularly interesting, just because it is much faster, or it can be in some workloads, I should say.

And OVN, while I think most people are using it for OpenStack, is really not designed specifically for OpenStack. There are a number of different integrations available: the OpenStack work being done with Neutron, but also work for containers, so Kubernetes, Docker, Mesosphere, oVirt, things like that. The OpenStack integration is available on GitHub in the networking-ovn repo. There's a project called Quilt that does another kind of management on top of OVN. And then the integrations with Kubernetes, Docker, and oVirt are available at those links. And I think Russell wanted to say something about Red Hat's involvement.

So I'm from Red Hat, and since we made the first releases of OVN and it's moved out of being an experimental project, we're now moving OVN into products at Red Hat. The first one is Red Hat Virtualization, where it's available as a tech preview, and it's also on the roadmap for our OpenStack and OpenShift products.

I just wanted to highlight a few things here. I'm not actually going to read through this; we'll make the slides available if you're interested in the particular features. Mostly we wanted to show the progress that we're making. The initial release of OVN was with Open vSwitch 2.6, and we continue to add new features required to reach feature parity, especially things that are needed for OpenStack. One thing Ben will talk about more in the future section is database clustering. In terms of the OVS roadmap, I think that's the most interesting item; it's the thing people have the most concerns about with OVN, because it uses databases to communicate, so we need a good story for HA and for distributing the workload, and Ben has been doing a lot of work on that. In terms of OpenStack, the main work has been bringing feature parity with the existing ML2 Open vSwitch driver. So Russell and others have been adding support in OpenStack for various OVN features as they're added, and bringing other features that OpenStack needs into OVN. The thing we should highlight most is the work being done on migration, so that people can migrate from the existing ML2 OVS plug-in to OVN; Russell will be talking about that as well.

All right, so I'm going to talk a little bit more about the OpenStack integration, and this is a good place to start. I think most people are familiar with the existing ML2 OVS backend for Neutron, the effective default that most people use today, and this diagram is just trying to show how OVN fits in. The top part here represents the Neutron server; this is the part of Neutron that presents the REST API, and it's common to all backends of Neutron. At the bottom we have Open vSwitch; both ML2 OVS and OVN use Open vSwitch as the virtual switch on each host. What changes is the piece in the middle.
The piece in the middle is what I think of as the backend-specific orchestration layer, or the control plane, and that's what's very different. We have a different driver (we implement the ML2 mechanism driver interface), but we also have a different set of services: ovn-northd, ovn-controller, and the databases. And that layer can be reused outside of OpenStack; it can also be used with Kubernetes, for example. So that's what changes as you move to OVN.

As I mentioned, the networking-ovn project is what implements the Neutron backend for OVN. The repo is openstack/networking-ovn, following the usual OpenStack naming convention. The ML2 mechanism driver is the primary thing we think and talk about, since it's the most commonly referenced driver interface in Neutron, but we actually implement others as well: an L3 service plugin, a QoS notification driver to support the QoS API in Neutron, and a trunk driver to support the newer VLAN-aware-VMs, or trunk port, API. The core function of networking-ovn is to take the resources you create in Neutron and program them into OVN's northbound database using the OVSDB protocol, via the Python OVS library.

Migration to OVN is one of the biggest questions. As we've built out OVN, improved it as a technology, and implemented the features, one of the big questions has been: what do you do about existing deployments, and how do you get there? What we've looked at so far is an Ansible playbook that can be used to migrate a deployment in place. We've been able to test this, show at least at a proof-of-concept level that it works, and identify the process you have to follow. In our current approach, one of the assumptions is that you're already using the OVS firewall driver with ML2 OVS; that simplifies the migration a little bit. The approach we've taken so far is an in-place migration, not live-migrating VMs around. There's an open question of whether we need to build an approach that involves live migration; we'd have to do that if we wanted zero downtime, but the simplest way to do this is if you can accept some amount of downtime. In our testing so far, we were able to migrate a medium-sized cloud with about 10 seconds of data plane downtime, which was better than I expected. We did a lot of work on ordering the operations so that the part that impacted the data plane was done as quickly as possible, with everything else staged beforehand. So that's what we're trying to do: see how good we can get doing it in place, and then figure out whether we need to take it a step further and build a rolling upgrade process. I'm hoping we don't have to, because that's a lot more difficult.

OK, if you're doing a new deployment, how do you do it? Well, you have DevStack, if you just want a development environment or simple testing; that's certainly available. The other thing we have is TripleO. TripleO supports OVN, and there's a Heat template that you enable; if you're familiar with TripleO, that will mean something to you. Of course, I'd love to have support in more projects, but that's what's built today.
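For reference, the DevStack path usually comes down to enabling the networking-ovn plugin in local.conf, and the Neutron configuration it generates points the ML2 "ovn" mechanism driver at the OVN databases. This is a hedged sketch from memory rather than an exact recipe; the option names, URL, and addresses below are assumptions that have shifted between releases, so check the networking-ovn documentation for the current form.

    # local.conf (excerpt): enable the networking-ovn DevStack plugin.
    # The plugin ships a sample local.conf with the full list of services
    # to enable or disable; this is just the core line.
    [[local|localrc]]
    enable_plugin networking-ovn https://git.openstack.org/openstack/networking-ovn

    # Outside DevStack, the corresponding Neutron config looks roughly like:
    [ml2]
    mechanism_drivers = ovn
    [ovn]
    ovn_nb_connection = tcp:192.0.2.1:6641
    ovn_sb_connection = tcp:192.0.2.1:6642

The TripleO path is similar in spirit: you enable the OVN Heat template instead of the default ML2/OVS one.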
All right, I want to talk a little bit about performance. There are a lot of benefits to OVN; one of them is that we can reuse it across other projects, not just OpenStack, but another benefit is performance. There are two ways to look at it: the control plane and the data plane.

On the control plane side, these things work very differently. ML2 OVS works quite similarly to several other parts of OpenStack: it's built on top of a set of Neutron agents written in Python, and it uses RPC over a message queue through the oslo.messaging library, with RabbitMQ being the most common choice. OVN is very different: it's a distributed, database-driven architecture. OVN replaces all of those agents, and we don't use the message queue at all.

So one question is: how does the control plane perform? This is a summary of some results from testing I did about five months ago, and there's a blog post linked at the bottom of the slide that goes into a lot more detail. The results shown here are from a test done with Rally. The total number of VMs created and destroyed was about 4,500, but it was done 500 at a time: create 500 VMs eight at a time, then destroy them all; create 500 VMs sixteen at a time, destroy them all; create 500 VMs thirty-two at a time, destroy them all. We went through these loops, creating and destroying tons of VMs and timing how long it took. The improvement was between 70 and 80% in the time it took, a drastic improvement with OVN. The reason we saw such an improvement is that part of the process of creating VMs with Nova blocks on Neutron: Nova waits for Neutron to report back and say, yes, we've actually provisioned the network, you may now proceed with powering on that VM and the network will be ready for you. It was that part of the process that added all the time, and we cut it way down. OVN supports that same synchronization, and we do report back to Nova that the network is ready; it just turned out that, at least in our testing, OVN was doing it that much faster. So that was a really promising result.

Then I wanted to say a few things about data plane performance. One of the goals we had when we started OVN is that we've been through a few iterations of network virtualization before, and we really wanted to take the best ideas from them and improve performance wherever we can. Ben and I in particular have backgrounds as Open vSwitch core developers, especially on the data plane side. A couple of things we wanted to highlight. The first is distributed routing. The way the existing ML2 OVS plugin works, there are namespaces, so if you configure different networks, the traffic gets connected with veth pairs and sent into a namespace, and it ends up requiring multiple classifications in Open vSwitch, which turns out to be pretty expensive; it adds up to a lot of time. What we do instead in OVN is a route calculation. Let's say you're going through a few different logical routers: we do the calculation to find out what the ultimate destination is, so if you go through a couple of routers you might decrement the TTL by two and then set the destination MAC address to a particular value. Rather than going through the full IP stack of all these different namespaces, we know what the end result for the packet is, so we just make a local modification to the packet and forward it on. That also makes the performance much better, because all you're doing is: a packet comes in, you modify its headers, and you forward it to the destination, and it looks as if it's been routed.
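If you want to see that logical pipeline in action, ovn-trace (added around OVS 2.6) will walk a hypothetical packet through the logical flows and show the header rewrites. Here is a rough sketch against the made-up two-switch topology from earlier; the port names, MAC addresses, and IPs are just the example values from that sketch, not anything OVN requires.

    # Trace a packet from vm1 on sw0, addressed to vm2's subnet via the router
    ovn-trace --minimal sw0 '
        inport == "vm1" &&
        eth.src == 00:00:00:00:00:01 && eth.dst == 00:00:00:00:ff:01 &&
        ip4.src == 192.168.1.5 && ip4.dst == 192.168.2.5 &&
        ip.ttl == 64'

    # The summarized output shows the router hop as local header rewrites
    # (roughly): ip.ttl--, eth.src/eth.dst rewritten to the next hop, then
    # output to vm2. No namespace or kernel IP stack is involved.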
We've also done some things with ARP suppression. Since we're familiar with the topology of the network, when an ARP request comes in, the local OVN agent, ovn-controller, actually receives that ARP request, figures out the correct MAC address, and just sends the ARP reply itself, rather than broadcasting to everybody. So it reduces the amount of broadcast traffic in the network as well.

Another area is that we use the native connection-tracking functionality on the system. This is something the ML2 OVS plugin has also added support for in the last year or so, using the connection tracker that we've now exposed to Open vSwitch on Linux. So for ACLs and for NAT we're able to keep everything in the kernel, and the performance is a lot better than some of the alternatives that were available before. As I mentioned, for some of the other platforms like DPDK, you have to recreate all of those services. The way DPDK works is that packets bypass the kernel and get delivered directly to user space, so you get a performance gain because you're not running through the kernel for all of your packet processing, but you lose the features you might otherwise use. We have to recreate those, and they're being added: we've added firewall support to DPDK, for example, and we're in the process of adding NAT support. We're recreating those services, but we're using the same interface in Open vSwitch for all of them, and there should be some performance gains from that.

The next thing is the performance of Geneve versus VXLAN. We use Geneve as the tunnel protocol, so all packets are tunneled when they're sent between systems to build the overlay. One of the questions we get is: why don't you use VXLAN for that, since there's hardware-accelerated VXLAN? Geneve is another alternative to VXLAN for tunneling. It came out a few years later, but because of the way hardware revision cycles work, Geneve offload has by now made it into a lot of the NICs that offer VXLAN offload. And even on NICs that don't have that acceleration, pretty much all NICs support checksum offloading, and with that, which is taken advantage of in some of the newer Linux kernels, the performance comes up to, and in some cases exceeds, the performance of VXLAN. Russell did a lot of the performance work on this and is planning to post something from it. I don't have a blog post for this one yet, but I've done some pretty extensive performance testing in this area, trying to decide for sure, based on testing with NICs of different capabilities, whether we really need to add VXLAN support or whether we're justified in not adding it. My conclusion is that it's really not justified from a performance perspective, and I hope to have a blog post sharing those results soon.
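For anyone who wants to poke at this on their own hardware, here is a quick hedged sketch; the interface name and remote IP are placeholders. In an OVN deployment ovn-controller creates the tunnel ports for you, so the manual ovs-vsctl step is only useful for standalone experiments.

    # Check which tunnel-related offloads the NIC advertises
    ethtool -k eth0 | grep -E 'tx-checksum|tnl-segmentation'

    # Manually create a Geneve tunnel port on an OVS bridge (standalone test;
    # in OVN, ovn-controller sets these up automatically between chassis)
    ovs-vsctl add-port br-int geneve0 -- \
        set interface geneve0 type=geneve options:remote_ip=192.0.2.10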
And the reason we want to use Geneve instead of VXLAN is the amount of metadata. With VXLAN you're limited to a 24-bit identifier that you can attach to the packet, but we want to be able to carry a lot more information, like the logical ingress port and the logical egress port, and that allows a lot of improvements to the capabilities of the system. So we've really tried to use Geneve as our preferred encapsulation instead of VXLAN, and it looks like that was a fine decision, but there have been some questions, and I think Russell's numbers will help bear that out.

All right, thank you Justin, thank you Russell. Now I'm going to talk a little bit about the future of OVN. We have a lot of things in store. One of the features I'm working on myself is database clustering. With OVN to date, we have a high-availability story that uses Pacemaker in an active/backup kind of configuration. That works, but it's not very modern and it doesn't allow you to scale out. So we're taking the Open vSwitch database and adding Raft-based clustering, along with some features that should also allow it to scale out in addition to providing high availability.

Another feature, which Justin here is working on, is that in many installations, when an ACL is violated and a packet is dropped, people would like to know what happened and why. So we're adding the ability to log ACL violations and eventually send those to some sort of centralized collector.

One of the weaknesses of the OVN design is that it puts a lot of trust in the hypervisors. Currently, if one of the chassis, one of the hypervisors, is compromised, it could do a lot of damage to the centralized databases. So at the same time we're adding clustering, we're also working to add permissions to the database, so that the hypervisors can only modify the bits of the database they actually need to modify, which are very small pieces, and the damage would be limited.

We're working on scaling in a couple of different ways, scaling in terms of the number of hypervisors. Some of that happens naturally through the database improvements. Other parts we're working on by looking at our ovn-northd daemon and trying to come up with ways to shard it, so that if it becomes a bottleneck, which so far it doesn't seem to be, we'll have a good solution if and when it does.

We're working on various ideas for service function chaining. I believe we have some patches already in flight there; I don't remember their exact status, but service function chaining seems to be important to a lot of people, and you can be sure that it will be there fairly soon.

One of the exciting ideas, in my opinion, is that we'd like to get IPsec support for the tunnels that connect the hypervisors, so you could be assured that the data conveyed among your hypervisors is as secure as if it remained on a single hypervisor. The hard part there, as always, is the key management. Open vSwitch already supports IPsec for tunnels, so once we have the key management story sorted out, actually turning it on should not be that difficult.
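As a point of reference for the ACL logging item above, this is roughly what an OVN ACL looks like today through ovn-nbctl. The switch name and match are made up, and the logging knob itself was still under development at the time of this talk, so its exact form isn't shown here.

    # Drop inbound SSH to workloads on sw0; allow established/related traffic
    ovn-nbctl acl-add sw0 to-lport 1010 'ip4 && tcp.dst == 22' drop
    ovn-nbctl acl-add sw0 to-lport 1000 'ip4' allow-related

    # List the ACLs configured on the logical switch
    ovn-nbctl acl-list sw0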
Let's see, and then Russell has a little information about the last couple of points here. Sure. I have a couple of OpenStack features to call out that we haven't started on, but that I'd really like to get into our OVN integration. The first one is a native driver for OpenStack load balancing. Today you can use Octavia, and that works fine with OVN, because Octavia works by creating service VMs that run HAProxy instances; it's a separate layer from what OVN does, and it provides load balancing on top of OVN. But OVN actually has its own native load balancing built in, and I'd like to expose that through OpenStack. At least for what OVN's load balancing supports, it should provide a pretty drastic performance improvement, so we just need to get the driver written to expose it, and I'd like to get that going.

Another one is adding more options and flexibility to our L3 gateway support. There are capabilities in OVN that we haven't exposed through OpenStack yet. One of them, used in the Kubernetes integration, is for a given network to support several SNAT gateways and distribute the traffic among them. I think that would be a really nice thing to expose through OpenStack; it's there in OVN, and we just need to expose it. It's also a good example of one of the benefits for OpenStack in adopting this: we get improvements that may have originated outside of OpenStack. This last one was developed as part of the Kubernetes integration, or in support of it, and it turns out to be very interesting and useful for OpenStack too, so we essentially get it for free, and I'm looking forward to more of those.

All right, let me tell you about something that's a little bit more of an experiment, in my opinion. Linux has had support for something called BPF for a long time. You might have heard of it in the context of tcpdump: when you write a tcpdump command, the filter you supply that says which packets you want to look at actually gets compiled into a program in an instruction set called BPF, for Berkeley Packet Filter. But BPF can be extended to do a lot more; you can use it as a general-purpose assembly language, something you might compare to the Java virtual machine. The idea is that, instead of having Open vSwitch use a custom kernel module, we make it possible for OVS to use BPF-based code, and then we don't need our custom kernel module at all. There are several advantages to that. One is that we suddenly have a lot more portability: Open vSwitch user space needs to support several different versions of the kernel because of differences in the kernel module across those versions, but if Open vSwitch came with its own BPF code for whatever it actually wanted to do, then user space would be much easier to implement, because it would know exactly what it was programming against, in other words an API that it had itself implemented in the kernel. This also gives us the ability to do new and interesting things. For example, right now you can only use the tunnels that your kernel supports; if your kernel does not support Geneve or whatnot, then Open vSwitch can't use it. But if we could implement those in Open vSwitch's BPF code, that would give us much more opportunity for extensibility. And if you take it a level deeper, it gives us the ability to take OVN functionality and push it directly into the kernel instead of having to implement some of it in user space. So that gives us some new possibilities for extending OVS and OVN in ways that are fast at the data plane layer, instead of being limited to what we can do in user space.
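Just to make the BPF idea concrete, here is a minimal, generic eBPF program of the kind the kernel can load and verify. This is not OVS code and says nothing about how the OVS datapath work will actually be structured; it's only meant to show the shape of a small packet-processing program in that instruction set, written in C, compiled with clang's BPF target, and attached here at the XDP hook, which is one of several possible attachment points.

    /* xdp_pass.c: a trivial eBPF/XDP program that inspects nothing and
     * passes every packet up the stack.
     * Build:  clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o
     * Attach: ip link set dev eth0 xdp obj xdp_pass.o sec xdp
     */
    #include <linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    SEC("xdp")
    int xdp_prog(struct xdp_md *ctx)
    {
        /* A real datapath program would parse headers starting at ctx->data
         * and decide to drop, redirect, or rewrite the packet. */
        return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";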
So, a final idea. From the beginning, OVN has been part of the Open vSwitch project, and its code has been part of the Open vSwitch repository on GitHub. That was very convenient at first, but it's becoming increasingly unclear whether the projects should be that closely coupled. If OVN were broken out into a separate code repository, it could evolve separately: we could easily have separate release schedules, and perhaps different governance, or independently chosen groups of people working on each. This isn't anything we're planning to do right away, but it's increasingly on our minds.

And, well, that's the end of our prepared remarks. We have some pointers to further resources here, and you can look those up if you're interested. If you want to listen to more talks about Open vSwitch and OVN, the last item there is the OVS Orbit podcast, which is something I edit, produce, and release twice a month. So thanks, everyone. We have plenty of time left, and we're available for questions, so feel free to come up to one of the mics in the aisle, or if you don't want to ask your question in front of everyone, we'll be here afterward. Please come up to a mic.

Yeah, sure. Sorry, I came in a couple of minutes late; what was the motivation for splitting out OVN from OVS to begin with? Do you want to take that? Well, OVN is fairly coupled to OVS, in the sense that OVN makes heavier use of OVS features than, I suspect, any other project out there, and some of the features in OVS have been inspired by the needs of OVN and other projects. That's one of the reasons they're currently coupled, but that's becoming less and less important, and it seems more like we have two separate but overlapping groups developing them, so there's not as much need to keep them so tightly interlaced anymore.

Sure, my question was even more basic: the motivation for OVN itself, independent of Open vSwitch. Oh, okay. Justin, do you want to say something about why we started OVN? I think that's really the question. Yeah. I think the most basic reason we wanted to create OVN was that we saw a number of people building network virtualization systems, and they were of varying quality, and we thought we could bring a unique perspective, because a lot of the people who had written them in the past were not the Open vSwitch developers themselves. So when we started the project, we spoke with some of the people who had built these network virtualization platforms and asked them what they would do differently if they were going to build it from scratch, and we really tried to take the best ideas from those and understand where they thought they could have improved. And then, because we understand Open vSwitch better than anyone else, we asked what the best ideas are that we can take from Open vSwitch and bring those as well. That was a lot of the motivation for starting this project.

Hello. I saw you have plans to implement the firewall in user space with DPDK, but on the other hand you have plans for eBPF. Do you have any plans to implement the firewall in the kernel with eBPF? Well, I think that's something we're currently looking at, which is how we integrate with the kernel for eBPF.
I think we may end up creating some basic connection tracking done in eBPF; that's an area of research. What I don't think we'll do is this: one of the things you get from the connection tracker in the Linux kernel is ALG support, so it can do things like parse the FTP control protocol to know where it should punch holes, and that would be pretty difficult to do in eBPF. There's also been a push in the Linux community not to keep adding those features to the kernel anyway, because it creates bloat in the kernel. So the kernel community is heading that way somewhat independently of us, and what we would do instead is push a lot of that functionality up: you'd have a fast path that sends those sorts of flows to a user-space program that can do the ALG processing.

Ben mentioned that there are some scalability improvements on the horizon, which sounds very promising, but my question is: which cloud sizes do you currently target? Say I have 2,000 hypervisors running hundreds of thousands of virtual machines; is OVN the right choice for me, or should I look for something else?

So to me, thousands of hypervisors sounds pretty challenging, and I don't think anyone on this stage has personal experience trying to run OVN with thousands of hypervisors. At a previous OpenStack Summit, a developer from eBay stood up and talked about his experience with OVN and thousands of hypervisors. I believe his conclusion was that, for their purposes, it was fast enough with 2,000 hypervisors but too slow with 3,000. Since then we've had some performance improvements, so I don't know if it would still be the same, sorry.

I was just going to say that was a couple of releases ago, and I can think of at least a few pretty significant performance improvements since then, so I would hope it would do better, but it's pretty hard for us to get access to that kind of scale for testing. We do have a project called OVN Scale Test, which I don't know has been run a lot in the last few months, but it was built so you can simulate a much larger scale than what you actually have, taking a single bare-metal host and simulating 10 or 20 hypervisors on it, at least for exercising the OVN control plane. That's been part of how we've been able to test some of those larger scales, but we haven't done it in a little while, so it's about time we do it again.

And I think there's some low-hanging fruit that we know we can address, but as you mentioned, we don't have that kind of scale, so it's hard; we don't like fixing theoretical performance issues. Once we get more feedback as people deploy this more, there are some pretty clear optimizations we have ideas about that we think would help, but once again, we don't want to start on those until we actually know they'll make an improvement, as opposed to just complicating the code.

There was one scale report on the mailing list in the last month or so that said that with something on the order of 1,000 hypervisors, maybe it was more, the cold-start performance of the database is very slow, which is something the database work we're doing now should help to address. I guess you're referring to my report on the mailing list. Oh, was that you? Yes, that was me, the one with the scalability questions. Oh, thank you, I've wanted to meet you. Nice to meet you. Let's talk afterward. Thank you.
Thanks for doing this talk. I really think you guys are putting the right effort in the right places for the project, so kudos to you for that. Am I correct in understanding that this is an overlay network at its core? Yes. So I run an overlay network myself, and one of the biggest challenges we have is the gateway integration, which we kind of skimmed over here. Do you maintain that part of the code in the project, or is it done by somebody else? And if it is you, could you talk about what protocols you support and any thoughts you have on that topic?

So we do have gateway support for L2 and L3. I think what you're talking about, though, at the protocol level, is probably integrating with the rest of the network, with BGP or some of those protocols, and there's not much in the way of that. We have the capability that can be configured: it can do NAT for L3 and basic L2 connections to a network. But there's not currently, as far as I'm aware, anyone working on it. There's been discussion about it, but I don't know of anybody who's working on integrating at that level of network connectivity. That's something where we'd really appreciate some contributions or support from somebody who operates that sort of thing and knows how to do it properly. Personally, I don't know BGP from any other three-letter acronym. It's not my... We're gonna begin getting packets. Okay. Okay, I'm just teasing. I would love to help, though. Help wanted, basically. Yeah, that would be great. That's an area where we're not large-scale cloud operators, so, I mean, obviously we're familiar with what BGP is, but not actually how someone would deploy it in an environment, because we just don't have those sorts of workloads on our desktops. Sure. Yeah, I'll ping you guys later. Okay, great. Yeah, that's great. Thank you.

I tried to search for material online about how you do the metadata service and couldn't find much there. Is that documented anywhere? So right now, your best bet is to use config drive. Metadata is one of the last, if not the very last, feature-parity items. We have a pretty detailed design document up, if you look in Gerrit against the networking-ovn repo, that describes how we're doing it, and we're doing it in a way that's fully distributed across all nodes. It does require one new feature in OVN; there are patches up for review, actually on the second or third iteration right now, for the OVN piece we need to fully distribute the metadata proxy portion, and I expect we'll have it done this release. It's in progress. Okay, thanks for the clarification.

Second question, probably a little more detailed. On the NAT implementation, does the traffic go through a namespace at all, or is this completely inside of OVN and the kernel? There are no namespaces at all. No namespaces at all, okay. It modifies the connection tracker, the netfilter conntrack, directly in Linux, and we're adding that support for DPDK. Okay, all right, thank you.

Sorry, I think he was first. Yeah. Hi guys, thanks for the presentation. I just have one question: multicast? Well, the answer is no, but did you mean in the overlay or the underlay? Underlay. Underlay, yeah. Yeah, no, we don't. I mean, in most of the deployments...
Is that anywhere on your roadmap, any tests, any kind of quality testing with multicast? Supporting telcos that need multicast inside the cloud for streaming and so on. We don't. Most of the experience that we've had, or at least that Ben and I have had, is in data centers, more enterprise data centers, and from what I've seen they typically don't run multicast. But no, we don't have any plans for it. It's something we could certainly look at, but there's nothing on the roadmap currently. We have a production environment at Swisscom running multicast with OVS. Oh, okay, all right, it would be great to talk later about that; that would be good. Thank you. Yeah.

The Ansible playbook for migration: I don't know that I had a link to it, but go to Gerrit; it's sitting up for review against the networking-ovn repo.

Okay, just to follow up on the gateway question that the other gentleman asked: floating IPs, is that supported today, since you guys don't use namespaces or anything? Yes, it is supported. Okay, great, thank you. And it also supports, well, either pinning them to a gateway node or having the floating IPs bound to each hypervisor, like you can do with ML2 OVS; OVN supports floating IPs in that mode as well.

One of the pain points with ML2 OVS today is the management of the L3 gateways, what we call the network nodes. So how is that? I mean, you mentioned that it's maybe a little bit better with OVN, but do you have a diagram or something you can put up? We don't have a diagram for that. But what do we have? I mean, there's not a whole lot of management. It's sort of up to you how you design your deployment, whether you have dedicated network nodes or not. Which node is chosen as a gateway node is up to networking-ovn; that's the OpenStack layer. It chooses among the hosts that have access to the physical network it's gatewaying to. So it looks at all the hosts that have been configured with a bridge mapping that says here's how you access this physical network, and it picks one of them based on what existing gateways have already been scheduled. It's a pretty simple approach, but you can use either your compute hosts as gateway nodes or separate nodes; it's sort of up to you. Does that make sense? Yeah, it does. It does. Thank you.

Okay, so we're out of time, but if I can add just one final remark: we have the Open vSwitch open source day on Wednesday here at the OpenStack Summit, and we'll have a series of seven sessions, many of which are divided up into more than one talk. It includes a lot of material about OVN and even an OVN tutorial. So please consider attending some of those sessions on Wednesday. Thanks, everyone.