All right, well, let's get started. First, thanks everybody for coming out. My name is Ryan Tidwell, I'm with Hewlett Packard Enterprise. Today we're going to talk a little bit about some of the work that's gone on in Neutron over the last couple of cycles, specifically dynamic routing, its applications in the cloud, and what we've accomplished so far. And hopefully, time permitting, you'll get a chance to walk through a bit of a demo that shows some of the code we were able to get into Mitaka.

To start off, I just wanted to set the stage and motivate the problem we're trying to solve. Traditionally, when we've thought about OpenStack deployments, going back even to the Nova Network days before Quantum/Neutron, there's been this thinking that I have tenant networks, they're private networks, and all of the fixed IPs that I allocate on those networks just hide behind a floating IP for north-south traffic. And of course I don't worry about floating IPs for east-west, because I'm just going to put everything on the same network. As things have evolved and people have tried to put OpenStack into production, we've found that that model doesn't quite work. So we've evolved to things like DVR in Neutron, which will actually route your east-west traffic in a distributed way. But the key thing is that for north-south traffic in and out of the cloud, your best bet has always been to hide behind a floating IP or SNAT.

Now I want you to imagine a world where floating IPs and SNAT don't exist. When you think about IPv6, this is the world you live in: we don't support floating IPs for IPv6 in Neutron. So in that case, how am I going to route traffic down to my VMs? They're not hiding behind a floating IP. I might have subnets, prefixes, popping up in my cloud that I need to route. And there are ways to do that. So this is where we start the talk, and you have a couple of options.

For traffic egressing the cloud, Neutron takes care of that for you, at least on the outbound flow. As I've outlined here, Neutron carries the gateway IP for the external network as an attribute, and that gets programmed into your Neutron router as its default gateway. That's great, because when my VMs start sending traffic out of the cloud, it all gets routed upstream just fine. But say I'm trying to open a TCP connection, something bidirectional. That traffic needs to make it back to my VM, and this is where the problem we've been trying to tackle comes from. Of course, this is not a new problem. Since the dawn of IP, people have been puzzling over this, and it was solved long before I came into the industry. It's a basic IP routing problem: am I going to statically route traffic, or am I going to use a routing protocol to advertise subnets and their next hops? We can statically route traffic, which involves putting manual entries for next hops in the infrastructure. If you've got a small cloud, this might work OK for you. If you don't expect to route a large number of prefixes, and you expect them to stay static, a routing protocol might not necessarily be for you in your deployment.
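Just to make the static option concrete, the manual entries I'm talking about are ordinary next-hop routes configured on the upstream infrastructure router. On a Linux-based router it would be something like this; the prefix and gateway here are made-up placeholders, not from a real deployment:

    # Upstream/infrastructure router: point the tenant prefix at the
    # Neutron router's gateway IP on the external network.
    # (Addresses are placeholders for illustration only.)
    ip route add 10.1.0.0/24 via 172.24.4.3

Every time a subnet appears or disappears behind a Neutron router, somebody has to add or remove an entry like that by hand, which is exactly the operator overhead I'm about to describe.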
However, if you have a more dynamic environment, or you're scaling up a little bit bigger, this approach requires operator intervention any time a Neutron router is created or a subnet is instantiated behind that Neutron router. Conversely, as a subnet is deallocated or a router is destroyed, you want to go clean up those routes. So there's some operator overhead involved there. Or there's dynamic routing: you configure a routing protocol of some kind that has your routers talk amongst themselves and negotiate where the next hops are for these subnets. And this is the functionality we've been working to bring to Neutron over the last couple of cycles. As a philosophy, what we wanted to do was isolate the Neutron L3 agent from these changes. And in thinking about this, BGP seemed like a nice place to start. It allows us to separate the control plane from the data plane, the data plane in this case being the Neutron L3 agent and the namespaces the routers operate in.

So what did we get into Mitaka? Where are we at right now? What we have right now is the ability for Neutron to announce next hops for tenant networks that are behind a centralized router. And I apologize for that formatting, I didn't quite get that cleaned up, but you get the point. We can announce next hops for tenant networks that are behind a centralized router, or CVR. We can announce next hops for tenant networks that are behind a distributed router, DVR, which you might have heard of. We can also announce host routes for floating IPs, and that announcement is compatible with both centralized routers and distributed routers. I'll talk about how that all works later; I've got some examples that will illustrate it in action. But before we go too much further, I wanted to give a couple of shout-outs to developers who weren't able to attend the summit this time. Vikram from Huawei and Neumann from Red Hat contributed significantly to getting this work done in Mitaka. So shout out to them.

So, the architecture. What you get in Mitaka is a service plug-in that runs inside your Neutron server, alongside the L3 plug-in that exposes all the L3 APIs. And then you're going to have a dynamic routing agent, or the DR agent as you might see it referred to. Inside the dynamic routing agent we've exposed a driver interface, so the back-end BGP implementation is pluggable. In Mitaka, our reference implementation drives a Ryu BGP speaker. The interface is there so we could in the future bring in support for Quagga or BIRD or name your BGP speaker technology. And the dynamic routing agent, like all the other agents in Neutron, communicates with the Neutron server and gets its directives via RPC calls.

This is not prescriptive, but just to illustrate one way you might deploy this in practice, take this figure right here. I have two autonomous systems that I've configured. The one on the bottom is the autonomous system that I use for my cloud. You can see the dynamic routing agent there, which is communicating with OpenStack via Neutron RPC over RabbitMQ. And inside that agent there's a driver, which is communicating with your BGP speaker technology, be that Ryu, Quagga, whatever it is.
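To sketch what wiring that up looks like in Mitaka: you enable the BGP service plug-in on the Neutron server and point the DR agent at a speaker driver. This is a rough sketch only; the exact class paths and option names may differ with your packaging, and in Newton this code moves out to its own repository:

    # neutron.conf on the Neutron server (sketch; "router" is the
    # regular L3 plug-in, the BGP plug-in runs alongside it)
    [DEFAULT]
    service_plugins = router,neutron.services.bgp.bgp_plugin.BgpPlugin

    # bgp_dragent.ini for the dynamic routing agent (sketch)
    [BGP]
    bgp_speaker_driver = neutron.services.bgp.driver.ryu.driver.RyuBgpDriver
    bgp_router_id = 192.168.1.59

    # Then run the agent (sketch)
    neutron-bgp-dragent --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/bgp_dragent.ini

The driver line is where the pluggability shows up: swap in a different driver class and the same agent would drive a different BGP speaker implementation.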
That driver opens a peering session with your infrastructure router in another autonomous system and exchanges route information over that session. In this case it's an EBGP peering session; we can support IBGP as well, no issue there.

So now this is where things get interesting: what routes are we going to announce? That's a good question to ask, because we don't want to create a BGP speaker and then announce prefixes we have no business announcing. We want to scope that and give operators a way to control it. As I mentioned, we can announce host routes for floating IPs. Prefixes that need announcement are discovered, and I'll talk about this in more detail; I've got some slides that'll walk us through it, and a demo as well. Prefixes that need announcement are discovered by associating a BGP speaker with an external network. What happens is, on that external network, we look at all the Neutron routers that are connected to it. Then from those routers we go and look for subnets behind them that need announcement, and we announce the Neutron router as the next hop for those subnets.

Now, the scoping of what we announce through any given BGP speaker is heavily influenced by address scopes. So I think it's useful to touch on address scopes real quickly, because they're fundamental to the way our Mitaka implementation discovers routes that should be announced. If you're not familiar with address scopes, Carl Baldwin gave a great talk this morning; if you didn't catch it, pick it up on YouTube, I'd encourage that. But I'll touch on it briefly here. This really had the bow put on it in Mitaka, and the idea is that subnets in Neutron can be scoped, so we can enforce uniqueness and communicate the notion of a routing domain inside Neutron. If you want an intuition for what an address scope is, think about your home network. Residential broadband connectivity is probably a good context in which to describe the concept. You have the public internet, which your internet service provider brings to your home router, and you typically get one IP if you're running IPv4. Behind that router, you're addressing the network inside your home with 192.168.1.0 or something with a 10 prefix, something that's not routable on the public internet but is routable inside your home. So your home router creates a barrier between address scopes, and the way you traverse those scopes is through NAT or port forwarding or tools like that. So that's a quick overview of address scopes.

In this diagram, you're going to see three tenant networks. Two of these exist in the same address scope as the external network, and there's a third that's in its own address scope. What gets announced by Neutron is the two subnets that exist in the same address scope as the external network, with their next hops being the Neutron routers they're connected to. The tenant network on the bottom right doesn't get announced by our BGP speaker, because it's outside the address scope.
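To make that concrete, the kind of setup that puts tenant subnets into the same routing domain as the external network looks roughly like this. The names, prefixes, and prefix lengths here are placeholders, not the ones from the diagram:

    # Admin: create a shared address scope and a subnet pool inside it
    neutron address-scope-create --shared public 4
    neutron subnetpool-create --shared --address-scope public \
        --pool-prefix 203.0.113.0/24 --default-prefixlen 26 demo-pool

    # Subnets allocated from pools in that scope (the external network's
    # subnet and the tenant subnets alike) land in one routing domain,
    # so BGP will announce the tenant subnets; a subnet created outside
    # the scope, like the third network in the diagram, won't be.
    neutron subnet-create --name tenant-subnet \
        --subnetpool demo-pool tenant-net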
All right, so let's move on to announcement of floating IPs. In Mitaka this really isn't as compelling a feature on its own, but we went ahead and implemented it because, if you're familiar with the routed networks proposal being worked on right now inside Neutron, it's going to become more important and more useful in that context. The idea is that we're going to announce host routes for your floating IPs. When we get to routed networks in Neutron, the floating IP range isn't going to be bound to an L2 segment, so your floating IPs actually float across L2 domains. And the next hop for a floating IP may not necessarily be a Neutron router; it may actually be right on the port where your fixed IP lives, so the way you announce the next hop would be different. But we wanted to at least lay the groundwork in Mitaka to do these announcements. An important thing to point out is that if you don't want your network flooded with host routes from these floating IPs, that can all be toggled on your BGP speaker. You can suppress announcement of host routes, or even of tenant network routes if all you want to announce is host routes for your floating IPs. So there are knobs you can turn inside Neutron to tweak what gets propagated through BGP.

All right, so before I go into the demo, I want to explain the environment I've created here, so you know what you're looking at as I walk through it. What I've created is a Neutron router with three tenant networks connected to it. It has an external network, and that's connected to a Quagga router that I have in my environment. I have a Neutron dynamic routing agent, which I'm going to show you, and I'm going to make it peer with that Quagga router. You'll see the tenant networks get announced and show up inside Quagga.

So let's start here. I've gone through the trouble of creating all these resources and putting things mostly together ahead of time, so we don't have to do that now. If I run a neutron bgp-speaker-list, this shows me the BGP speakers I've created in my system. An important thing to note is that this is an admin-only function, so it's not exposed to your tenants by default. This is something admins have access to, but your tenants do not. Typically, as an operator, you might set this up on behalf of your tenants so their networks can get announced into your network. You can see that I've created the speaker with a local AS of 1234; that's the autonomous system I've created for my cloud, basically. And this BGP speaker is speaking IPv4 routes. We do support IPv6 as well, just as a note; we've tried to keep support equivalent for both address families. If I run a neutron bgp-peer-list, this shows me all the BGP peers I have available for my BGP speakers to peer with. So I have a peer IP; again, that could be IPv4 or IPv6, and our reference implementation supports peering purely over IPv6 or over IPv4. And I've configured a remote ASN of 4321. Some of these commands are a little long, so if I just want to show you all the BGP-related CLI commands that exist, I can just grep the CLI help for anything BGP. Now, you can also take a look at what routes you expect to be advertised by your BGP speaker by running this command.
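For reference, the commands that set up those resources look roughly like this. The names and the peer IP are placeholders rather than the exact ones from my environment, and the advertisement knobs I mentioned a moment ago show up as flags on the speaker:

    # Create the speaker for the cloud's AS, with the advertisement knobs
    neutron bgp-speaker-create --ip-version 4 --local-as 1234 \
        --advertise-tenant-networks True \
        --advertise-floating-ip-host-routes True demo-speaker

    # Define the upstream peer and attach it to the speaker
    neutron bgp-peer-create --peer-ip 192.168.1.1 --remote-as 4321 quagga-peer
    neutron bgp-speaker-peer-add demo-speaker quagga-peer

    # Associate the speaker with the external network so routes get discovered
    neutron bgp-speaker-network-add demo-speaker external-net

    # The "what will I advertise" check I just mentioned
    neutron bgp-speaker-advertiseroute-list demo-speaker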
Now, notice that in the previous slide I showed you, and I'll go back to it, I had three tenant networks configured. But if you look at what's getting announced, you only see two subnets. Why is that? This comes back to the discussion around address scopes. These two subnets that you see getting announced were created not only in the same address scope as each other, but in the same address scope as the external network that their router is connected to. Neutron sees that and says, OK, this is all part of the same routing domain, I can announce these. But there's that third tenant network, which is not part of the routing domain, so it doesn't get announced. That's why you only see two subnets here.

So I'm going to switch over to my Quagga router and show you what that configuration looks like. On the Quagga end, and this sort of configuration would apply across vendors depending on your router technology, I've defined the BGP process I'm going to run on my Quagga router, I've set my router ID, and I've also configured my neighbor. In this case, the neighbor is the IP address of the system where our dynamic routing agent runs, so this is where you're going to expect Neutron to be peering from. And the remote AS is 1234; that's the ASN I'm using for my cloud in this example. Notice I have the line here which makes the peering passive. If you're not familiar with Quagga, what this does is keep Quagga from trying to reach out to that peer and initiate a session on its own. It tells Quagga: just sit back, don't call us, we'll call you, sort of thing.

So now let's go into the Quagga interface and poke around and see what we have. Notice I don't have anything in my routing table except for the networks that are directly connected to me. Why is that? I don't have an active peering session open with my neighbor. So we're going to go ahead and change that and get this going. I'm going to flip back to Neutron here and run some commands. We've created all this inside of Neutron: we've created the BGP speaker and we've associated that BGP speaker with an external network. There are these commands here, bgp-speaker-network-add and bgp-speaker-network-remove; this is where you do the association of your BGP speaker to your Neutron external network, and where you remove it as well. Because I didn't want to worry about remembering UUIDs and all that, I've got a script here, and that's what I'm going to run. I just wanted to show it to you so you don't think I'm cheating. I'm going to run a neutron bgp-dragent-speaker-add command. What this does is take that BGP process we've constructed inside Neutron and tell it: go run on an agent. Just because we've configured all this in Neutron doesn't mean it's actually up and running; we have to tell it to go run on a specific agent. We have to schedule it manually. If I do a neutron agent-list, and that's a little hard to read, let me blow it up so everybody can see it, this shows me all the Neutron agents that are running. You'll notice there's my BGP dynamic routing agent phoning home to the Neutron server saying, I'm alive. So what we're going to do is go ahead and run that script, which is just going to associate the BGP process we've defined with an agent and get everything up and running.
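For reference, the Quagga side and the scheduling step look roughly like this. It's a sketch with placeholder addresses and IDs, not the literal config from my environment; 192.168.1.59 is the host running the dynamic routing agent:

    ! /etc/quagga/bgpd.conf on the upstream router (sketch)
    router bgp 4321
     bgp router-id 192.168.1.1
     neighbor 192.168.1.59 remote-as 1234
     neighbor 192.168.1.59 passive

    # Back in Neutron: find the DR agent's UUID, then schedule the
    # speaker onto it (this is what my script wraps)
    neutron agent-list
    neutron bgp-dragent-speaker-add <dragent-uuid> demo-speaker

    # And check what's scheduled where
    neutron bgp-speaker-list-on-dragent <dragent-uuid>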
Notice that the second thing I did, after doing the association with the agent, was list the BGP speakers that have been scheduled to the agent. So I can go agent by agent and ask, what BGP speakers do I have running here? And it'll show me that, which is kind of useful. This has given us time for the session to open up, and we should have... yep, cool. Quagga is now saying that I've got a neighbor with a remote router ID of 192.168.1.59. That's my Neutron dynamic routing agent, which has now opened a peering session with Quagga. Cool. Now let's take a look at what we've announced over that peering session. If you're not familiar with Quagga, anything tagged with B, like that first line right here, is something that was learned via BGP. So you can see we've got two routes that were learned via BGP, 10.0.0.0/24 and 192.168.0.0/24. This is Neutron, over a BGP session, telling my upstream router the next hop for these tenant networks, which would be the Neutron routers, or in this case the one Neutron router that I've created. So that was BGP in action.

We've got some work ongoing in Newton. We have some enhancements we want to make and features we want to add, so there are a couple of things in flight. First of all, when we first created this code, it was a bit of a slow process to get it into Neutron. We had to get core reviewer attention, and they've got a lot on their plates. This involves a lot of code landing in the Neutron repository that's just dedicated to doing dynamic routing. So an idea was floated: hey, why don't we spin this out? This is a great candidate for something to live in the Neutron stadium. That was in the middle of the Mitaka release, and we said, well, that's disruptive; basically none of this is going to make Mitaka if we get kicked out of the Neutron repository right now. So we decided the expedient thing to do was to land the code in the main Neutron repository, and then in Newton turn it into a stadium project. That's the work we're starting with in Newton, moving the code between repositories, which could slow us down a little bit with getting some of the features we'd like completed into Newton. I'm hoping it's easier than I think it is.

Beyond that, we'd like to enable announcement of host routes for fixed IPs when you're using DVR. The reason you might want that is that right now, if you want DVR treatment for your north-south traffic with IPv4, you have to attach a floating IP to every fixed IP you have. And you may not want to do that. Remember, we're living in a world where you may not have floating IPs, or may not want to use them; we want to give people choice. Unfortunately, the only way to get traffic directly to a fixed IP with DVR today is to send it through the central SNAT node and use that as the next hop. So your north-south traffic isn't really getting the same DVR treatment that your east-west traffic is. The reason we didn't enable these announcements in Mitaka is that, I mean, I have the code, I just didn't push it, because there are still some kinks to work out with DVR inside the L3 agent. It turns out, as we've started to look at this over the last couple of weeks, those fixes are fairly simple, and I do expect them to land easily in Newton. Once we have that, we'll get announcement of fixed IPs going, such that you'll get north-south DVR with both IPv4 and IPv6 if you use BGP.
The next place we'd like to go with this is to use BGP to configure L3 VPNs, and then to use EVPN technology. We're still hashing out what that looks like, and I expect a blueprint to show up shortly that we'll start iterating on. So those are our areas of focus for Newton.

Going back to my comments about DVR, I just wanted to illustrate that with a slide specific to it, so you understand what I'm describing. What I've shown previously is that the next hop we advertise through BGP is the gateway port on a Neutron router. Well, with DVR, that doesn't really give you distributed north-south data flow. You don't get fast exit on and off the compute node, because the route says send this to your central SNAT node. So what we're talking about doing with DVR is this: the floating IP namespace, or the FIP namespace as you might hear it called, has an IP on the external network, and that is what you really want to use as the next hop for reaching these nodes. That's what we want to advertise through BGP. If you look at the routing table on the right side of the screen, that's what you would expect in this scenario: we'd be directing traffic straight to the compute nodes, bypassing any centralized network node or Neutron router. It's not supported in Mitaka, but it's easily supportable in Newton, as long as we can get the associated DVR fixes in, which, as I mentioned, seem fairly trivial.

So with that, I want to leave some time for any questions you might have. My phone's up here at the front. So thank you.

Yeah, I just have a quick question. Does this dynamic routing with BGP work in a TripleO scenario? If I'm running Mitaka over Mitaka, and I'm trying to do BGP advertisement on my overcloud, with my base cloud also doing BGP dynamic routing, how do I make the overcloud tenant networks get advertised out? That's a great question. I guess the short answer is, where are you using Neutron routers? Are you using them in your undercloud, your overcloud, or both? You could use BGP in both, I suppose: you've got Neutron routers in both your overcloud and your undercloud, and you could have a BGP speaker in your undercloud and one in your overcloud. What you bring up is actually a scenario I've never considered. I'd have to think it through a little more, but that's the best answer I've got for you.

About BGP agent HA: if you manually associate this, and you configure the IPs on the router side, is it possible to do HA here? And what's the impact if the BGP agent goes down? Right, that's a great question; HA, we get that all the time. Given the tools we have to work with right now, I think the answer is to deploy multiple agents. If I just need one BGP speaker and want some HA, maybe I deploy two or three agents, depending on what level I want. You can actually schedule your BGP speaker to multiple agents, so you can be peering with all those routers from multiple places. That way, if you lose one agent, you still have a place where all your Neutron routes are getting announced from. So that's the best answer I've got for you right now.
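To put that HA answer in concrete terms, scheduling the same speaker onto more than one dynamic routing agent is just the scheduling call repeated per agent. A sketch with placeholder agent IDs and the speaker name from the earlier sketches:

    # Each DR agent hosting the speaker opens its own peering session,
    # so losing one agent doesn't stop route announcement.
    neutron bgp-dragent-speaker-add <dragent-1-uuid> demo-speaker
    neutron bgp-dragent-speaker-add <dragent-2-uuid> demo-speaker

    # See which agents are hosting a given speaker
    neutron bgp-dragent-list-hosting-speaker demo-speaker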
Now that we've got BGP in OpenStack, can we finally accept provider-independent addresses from our customers? Can we route arbitrary addresses directly to virtual machines? Can we give tenants a way to self-serve their own IP addresses? Yes, yes, absolutely. The way that would work is, as the operator, you'd lay down your external network. But when you go to address things in Neutron, what you want to make sure of is that you're allocating those subnets from a subnet pool in Neutron. If you're not familiar with subnet pools, go take a look at them, because the addresses you use here with dynamic routing need to come from an address scope. Yes, but users can have their own provider-independent addresses? PI-allocated addresses, not tied to the provider? That's right, I'm getting there. Your tenants could then create their own subnet pools, bring their own addresses, and allocate out of there. They could fill their subnet pools with whatever prefixes they want, and those would get advertised. Not every prefix they want, every prefix they own. Every prefix they own, yes, absolutely. You can support that today. Thank you.

It's about mobile IP. Can you tell me whether Neutron, or the community, supports mobile IP scenarios? Can you elaborate on what you mean by mobile IP? Mobile IP is when a VM with a public IP detaches from its dedicated router and gets moved to a different router. Is mobile IP supported, the way it is in current IPv6? I haven't considered that. When the IP moves it still has to be routable; the session has to be maintained, the route has to follow it, and the IPv6 route from outside on the internet can't just break. I have to apologize, I'm having a hard time following the scenario here. IPv6 has a lot of solutions for supporting mobile IP; I just wanted to ask if the Neutron community can support these advanced scenarios. Maybe let's chat offline if you've got a minute, Tom, I'd like to understand that.

Great demo, thank you, cool work. Does the Mitaka implementation you have right now support having two external peers, or multiple external peers? Yeah, great question. You can actually configure a BGP speaker to peer with any number of peers you want. Any number, OK. For simplicity of my demo I just did one. Understood. But you can peer with however many you want. And in fact, when you define the list of peers in Neutron, those are reusable, so you don't have to define a peer set for each BGP speaker. Got it. Can L3 agents peer with each other? So, first of all, you mentioned L3 agents peering with each other: the L3 agents actually don't peer, it's the dynamic routing agents, so the BGP is happening a little bit out of band. OK, got it. Now, to the second part of your question, do they peer with each other? All we've enabled in Mitaka is the ability for Neutron to announce routes, not to consume them, so we don't support peering with other agents yet. Understood. I'd be curious to understand your use case for that, because that sounds like something we could enable for you. Kind of like a connectivity broker between tenants. Sure, yeah, absolutely, kind of a model. But OK, very cool.
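Coming back to the bring-your-own-prefix question for a moment, the rough shape of that self-service flow is that a tenant creates a subnet pool holding the prefixes they own and allocates subnets out of it. This is a sketch with placeholder names and prefixes, assuming the operator has shared the address scope:

    # As a tenant: a subnet pool holding the prefixes you own, placed
    # in the operator's shared address scope
    neutron subnetpool-create --address-scope public \
        --pool-prefix 198.51.100.0/24 my-pi-pool

    # Allocate a subnet from that pool on your own network; once it's
    # behind a router on the BGP-associated external network, Neutron
    # announces it upstream.
    neutron subnet-create --name my-subnet --subnetpool my-pi-pool \
        --prefixlen 26 my-net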
One other thing: how much BGP state is the dynamic routing agent keeping for this BGP peer, and how much workload does that add to the overall dynamic routing agent, or to the L3 agent? Have I just reduced the scalability of the dynamic routing control plane? I don't have numbers to back this up and quantify it, but the overhead is actually pretty lightweight, because it's not in the forwarding path. Really, the resources you're consuming are just what's needed to maintain a peering session, basically a TCP connection. Those are the resources it consumes. Very cool, thank you very much.

Sorry, ladies and gentlemen, we are out of time for this session. Any remaining questions can be taken out at the side stage in the hall. Thank you.