Sorry about that, we'll get started now. I'm Ryan, this is Jaume, this is Vikram. We're going to be talking today about BGP dynamic routing in Neutron and some of the work we've been doing there over this last cycle. We're going to touch on a couple of things today: routing cloud network traffic in general, an overview of the dynamic routing service that we've been putting together, some of the applications of it, and some future work. And hopefully we have some time for some Q&A at the end. By way of background, I just want to put this out there: the work we're doing has not merged yet. We had hoped to get it into the Liberty cycle, but it's not quite there. So there's that caveat to what we're talking about today. Development is ongoing. This effort started a couple of cycles ago. Jaume got some patches up, we've been iterating on those, and that's taken us to where we are today. So I wanted to start off with an overview of the problem that we're trying to solve and how we think we can use Neutron to solve it. When we're talking about routing cloud network traffic, you can think of a Neutron network as being essentially a subnetwork, with just a default route that the Neutron router uses to send traffic outbound. The IP for that next hop is just read off of the subnet on the network; it's the gateway IP field. That's all great. But on inbound flows, we've got to communicate the next hop for the Neutron network to the infrastructure somehow. There are two ways, typically, that you can handle that: static routing in the infrastructure, or dynamic routing. First, you can statically route all your tenant networks. This requires manual configuration of next hops in your infrastructure. It requires operator intervention as routers, floating IPs, and subnets are created, updated, and deleted. And prefixes don't move easily between Neutron routers.
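The static-routing bookkeeping described above can be sketched in a few lines: for every tenant subnet behind a Neutron router, the operator must manually install an upstream route whose next hop is that router's external gateway IP. The data shapes here are purely illustrative, not Neutron's actual models.

```python
# Minimal sketch of the manual work static routing imposes: one upstream
# route per tenant subnet, next hop = the hosting router's gateway IP.
# Any router/subnet change means redoing this by hand.

def static_routes_needed(routers):
    """Return (destination, next_hop) pairs the operator must configure."""
    routes = []
    for router in routers:
        for subnet in router["tenant_subnets"]:
            routes.append((subnet, router["gateway_ip"]))
    return routes

routers = [
    {"gateway_ip": "203.0.113.10", "tenant_subnets": ["10.1.0.0/24"]},
    {"gateway_ip": "203.0.113.11", "tenant_subnets": ["10.2.0.0/24", "10.3.0.0/24"]},
]
needed = static_routes_needed(routers)
```

Moving a prefix between routers means finding and rewriting its entry everywhere it was configured, which is exactly the pain point dynamic routing removes.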
Or there's dynamic routing, where the operator configures a routing protocol amongst the infrastructure routers. What we're doing is making Neutron insert routes into that routing protocol, reacting to floating IP CRUD, subnet CRUD, things like that, so that these tenant networks, as you can see at the bottom of the slide, have an inbound flow that goes through the Neutron router appropriately. One of the things we wanted to do with this is isolate the Neutron L3 agent from these changes. So what we want is for Neutron to have a BGP route server separated from the L3 agent, and use that to peer with infrastructure routers. We think BGP is probably the best solution for this problem. One thing to note here is that as of now, we're targeting use cases where Neutron simply advertises routes to peers. It doesn't learn anything from the infrastructure. We can touch on that point later. So, why BGP? I'll hand over to Jaume to discuss that. Can you hear me? OK. First, there is a clear separation between the data plane and the control plane. We didn't want to overload the data plane of Neutron, because we think it's already overloaded. Also, there are use cases where we need to advertise routes to different autonomous systems, and OSPF and IS-IS don't handle those use cases. And also, we don't need to expose a big, complex routing network. We just need to say what the floating IPs or the network addresses of the tenant routers are. We don't need much more. So what are the applications and features that we have right now with BGP dynamic routing, once it gets merged — as Ryan said, it hasn't merged yet? First, we have a routing model for floating IPs. We can span different L2 networks and have the same floating range in both, or in all of them. So we separate the external network from the floating range; that's the first step we want to take in Neutron.
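The "react to floating IP CRUD" idea above can be sketched as a toy speaker (this is illustrative pseudologic, not the actual Neutron code): a floating IP create advertises a /32 host route pointing at the hosting router's gateway, and a delete withdraws it.

```python
# Toy model of the event-driven advertising described in the talk.
# Creating a floating IP advertises a host route; deleting withdraws it.
# Class and method names are hypothetical.

class ToySpeaker:
    def __init__(self):
        self.advertised = {}          # prefix -> next hop

    def floating_ip_created(self, fip, router_gateway):
        # Advertise a /32 host route with the router gateway as next hop.
        self.advertised[fip + "/32"] = router_gateway

    def floating_ip_deleted(self, fip):
        # Withdraw the corresponding host route.
        self.advertised.pop(fip + "/32", None)

speaker = ToySpeaker()
speaker.floating_ip_created("198.51.100.5", "203.0.113.10")
speaker.floating_ip_deleted("198.51.100.5")
```

The point is that no operator intervention is needed: the advertisement set always mirrors the current Neutron state.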
Then we have directly routable IPv4 and IPv6 external networks. IPv4 is not so important here; this is focused mainly on IPv6. I'll explain both use cases later. There's also an application for DVR, which Ryan is going to explain in more detail later, because DVR has a problem right now routing IPv6 traffic directly to the compute hosts, and we can improve DVR with dynamic routing. And then after that, we can talk about future applications: routed network segments, which Ryan is going to talk about again, because I'm not sure I understand that part, and Carl can tell us how it will help us. And there are a lot of people interested in L3 BGP VPN; I think a lot of hands will go up when we ask about that. Also, we have a particular use case at Midokura where we want to advertise a floating range using not the tenant routers but the gateway router, and we can leverage dynamic routing for that as well. So this is the schema that shows the use case for floating IPs. The idea is we have two different L2 networks, each with its router, and we have a BGP speaker that speaks on behalf of each one of the tenant routers. We don't have a BGP speaker on each tenant router because, as you know, BGP needs to be configured on both sides of the connection. If we had to configure the gateway each time a tenant creates a router, this feature wouldn't make sense. So we have a BGP speaker that works by itself and speaks on behalf of the tenant routers. This way the two L2 networks can communicate with each other using the same floating range, because they have a BGP connection between them. When an inbound packet comes from the enterprise network or ISP, the infrastructure knows where to route the packet depending on where the floating IP is hosted. This is one of the use cases that we can handle right now. And the next one involves address scopes.
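The shared-floating-range schema just described can be sketched as follows: one floating range spans both L2 networks, and each floating IP is advertised as a host route whose next hop is the gateway of whichever tenant router hosts it. All names and addresses are illustrative.

```python
import ipaddress

# One floating range shared across two L2 networks. The speaker (acting
# on behalf of all tenant routers) advertises per-IP host routes, so the
# upstream router learns which gateway hosts which floating IP.

FLOATING_RANGE = ipaddress.ip_network("198.51.100.0/24")

def advertisements(placements):
    """placements: floating IP -> gateway IP of the hosting tenant router."""
    routes = {}
    for fip, gateway in placements.items():
        # Every advertised IP must belong to the shared floating range.
        assert ipaddress.ip_address(fip) in FLOATING_RANGE
        routes[fip + "/32"] = gateway
    return routes

# Two floating IPs from the same range, hosted on different L2 networks.
placements = {"198.51.100.5": "203.0.113.10",   # router on L2 network 1
              "198.51.100.6": "203.0.113.20"}   # router on L2 network 2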
Address scopes are a feature that's not merged yet, but they're a way to define L3 domains. Maybe if I show the schema, it's easier to understand. So imagine the gray-faded square is an address scope. What an address scope does is group subnet pools together. Subnet pools are a Neutron feature: what a subnet pool lets you do is define a big pool. Well, this is mainly for IPv6, but I put IPv4 here because I can't look at an IPv6 address and see at a glance what's going on. So bear with me, and let's imagine these IPv4 addresses and networks are actually IPv6. You define a big range of addresses, and you can ask it to give you a new subnet out of the pool — I don't mind which, I just want a small chunk of the network — and the subnet pool guarantees that it's not overlapping. Address scopes go one step beyond and put subnet pools together: an address scope guarantees that different subnet pools from the same address scope will not overlap either. That means that anything inside the address scope can be routed; it doesn't need to be NATed. This way, when a tenant creates a subnet from a subnet pool, we can guarantee that it's not overlapping with anyone else, and you can route packets directly from the external network to the tenant network. For IPv6, this is very useful. And in the schema I put a different tenant that's outside the address scope and has the same range as the subnet pool of one of the other tenants. But there is no problem, because since it doesn't belong to the address scope, inbound packets to tenant 3 get NATed. So this is the difference between the two use cases. And then we have another use case, especially for Midokura's MidoNet, where we would like to define the gateway router as a Neutron router, so you can put a software-defined gateway router on top of the external network.
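The two guarantees just described can be sketched with the standard library (all names illustrative, and IPv4 standing in for IPv6 as in the talk): a subnet pool hands out non-overlapping subnets, and the address scope decides whether traffic is routed directly or NATed.

```python
import ipaddress

# Sketch of subnet-pool allocation plus the address-scope routing rule.

def allocate(pool, prefixlen, allocated):
    """Hand out the next subnet of the requested size that overlaps nothing."""
    for candidate in pool.subnets(new_prefix=prefixlen):
        if all(not candidate.overlaps(used) for used in allocated):
            allocated.append(candidate)
            return candidate
    raise ValueError("pool exhausted")

def needs_nat(src_scope, dst_scope):
    """Same address scope -> prefixes are guaranteed unique -> route directly."""
    return src_scope != dst_scope

pool = ipaddress.ip_network("10.0.0.0/16")
allocated = []
tenant1 = allocate(pool, 24, allocated)   # tenant 1's chunk
tenant2 = allocate(pool, 24, allocated)   # tenant 2's chunk, never overlapping
```

Tenant 3 in the schema sits outside the scope, so even if it reuses one of these prefixes internally, `needs_nat` is true for it and its inbound traffic gets NATed.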
If you could do this, you could configure the BGP speaker — you could have an SDN controller that implements the dynamic routing and automatically advertises the floating range networks. So if your enterprise grows, or you want to change or extend your floating range, people from outside the deployment can automatically route packets directly to the Neutron deployment. And those are basically the use cases we want to cover. So there's a DVR application to all this. One of the things that you'll probably run into with DVR when you're not using floating IPs is: how do I get that full DVR capability on both outbound and inbound flows that traverse the external network when I'm not using a floating IP? Right now, without publishing host routes for each port, you're basically stuck routing inbound traffic through the centralized router, and you lose some of the benefits of DVR. This becomes important in IPv6, where we don't do floating IPs for IPv6 addresses. So one of the applications of this is that we can advertise a host route for every port that points the next hop at the compute node. One of the things that we'll be discussing this week at the design summit is some refactoring and some additions to the L3 APIs in Neutron. One of the things that we can do is separate the floating range from the external network itself. Right now, it's really just a subnet that you put on the external network, and what that means is that in the DVR case, your compute nodes have to consume an IP address on your external network. And when you're using floating IPs, we end up pulling an IP address from that range. IPv4 addresses are very precious; we want to try to conserve them. Now, if we combine BGP with some of this refactoring that we want to do in Neutron, the external network really can just be treated as a link-local subnet, and we don't have to consume public IP addresses there.
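The per-port host-route idea above can be sketched very simply (shapes are illustrative, not Neutron's data model): for each port, advertise a host route whose next hop is the compute node hosting it, so inbound traffic bypasses the centralized router.

```python
# Toy model of DVR host-route advertising: each port's fixed IP becomes
# a /32 host route pointing at the compute node that hosts the port.
# IPv4 /32s stand in for the IPv6 /128s this is really aimed at.

def dvr_host_routes(ports):
    """ports: list of (fixed_ip, hosting_compute_node_ip) pairs."""
    return [(ip + "/32", node) for ip, node in ports]

ports = [
    ("10.1.0.5", "192.0.2.20"),   # instance on compute node 1
    ("10.1.0.6", "192.0.2.21"),   # instance on compute node 2
]
routes = dvr_host_routes(ports)
```

With these routes in the infrastructure, inbound flows land directly on the correct compute node even without a floating IP.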
And by advertising host routes, we can route both east-west and north-south traffic with DVR. In this example, you can see we have two compute nodes, two tenant networks on the back end, and an external network on the front end, with two instances on each tenant network. The east-west routing happens as it does today. With the addition of BGP, we can speak host routes for each of those instances into the protocol, which produces a routing table like you see in this diagram. One of the challenges with this, of course, is going to be scalability. You're publishing a host route for every port, and that upstream router needs a routing table that's capable of handling the scale you want to run at. So that's one of the challenges here, and there may be some creative ways that we can mitigate it; we're looking at those as we go forward in development. So I'll hand it over to Vikram to talk about architecture and what this solution looks like today. OK. So, all said, that's what we are going to achieve and why we want to do it. Now the question is how we have achieved it. Right now, this particular solution is being implemented as an advanced service plugin. It's not a separate project in itself; it's being developed as a part of Neutron itself. The idea is we will have a Neutron service plugin, we will have a scheduler, and we will have a DR agent. So let me dive in. What do we mean by a scheduler? A scheduler is an entity which schedules a BGP speaker. And what do I mean by a BGP speaker? A BGP speaker is an entity which speaks BGP, and you can have a lot of BGP peering sessions belonging to a particular speaker. So who is going to do the actual BGP speaking? Our architecture is very flexible: we have a driver layer where the agent can speak to multiple drivers, so we don't depend on any particular driver implementation.
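One possible mitigation for the routing-table blow-up mentioned above — an assumption on my part, not a committed design from the talk — is collapsing contiguous host routes that share a next hop into a single covering prefix before advertising them.

```python
import ipaddress

# Hypothetical mitigation sketch: aggregate contiguous /32 host routes
# that point at the same compute node into fewer, larger prefixes.

def aggregate(host_routes):
    """host_routes: next_hop -> list of /32 prefix strings; collapse each group."""
    out = {}
    for next_hop, prefixes in host_routes.items():
        out[next_hop] = list(ipaddress.collapse_addresses(
            ipaddress.ip_network(p) for p in prefixes))
    return out

# Four contiguous host routes toward one compute node...
host_routes = {"192.0.2.20": ["10.1.0.0/32", "10.1.0.1/32",
                              "10.1.0.2/32", "10.1.0.3/32"]}
aggregated = aggregate(host_routes)
```

This only helps when IP allocation keeps ports for one compute node contiguous, which is itself a scheduling constraint, so it is a trade-off rather than a free win.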
Our interfaces are flexible enough to incorporate different vendors. Right now, as a POC, we have used Ryu as the driver for the BGP speaker. In the current implementation, a scheduler can schedule a BGP speaker on multiple DR agents, the DR agents talk to their drivers, and the driver does the BGP functionality work. So, going ahead, here's a sample deployment. What could your sample deployment look like? We have two boxes: the upper one is your external network, which is outside the Neutron network; the lower one is the Neutron network. You can see a BGP dynamic routing agent which has two connections, one to the external network and one to Neutron. The BGP DR agent speaks to the Neutron server using an RPC mechanism, and with the outside router it establishes a BGP peering session using the driver. So this is one sample deployment. The other deployment could be a distributed architecture, where you want multiple DR agents to be deployed. In this scenario, you have your DR agents running on each of your ToR switches, which speak to the external network. Great. So that's all about architecture. This slide is about a potential application. As Ryan and Jaume were pointing out, the current work is only about advertising a route outside. It doesn't say anything specific about what kind of route it is — whether it's a VPN route or just a plain route. I just advertise. So how about achieving MP-BGP with it? I want L3 VPN support using it. Yeah, it could be done. As Jaume was pointing out about address scopes, what we can do is have different address scopes attach different RD and RT values to advertised routes. In that case, we can achieve the L3 VPN functionality.
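The pluggable driver layer Vikram describes can be sketched as a small interface that the DR agent programs, with concrete drivers (e.g. a Ryu-backed one) plugging in behind it. The method names here are hypothetical, not the merged Neutron driver API.

```python
import abc

# Hedged sketch of the driver abstraction: the agent only knows this
# interface; vendors supply implementations. Names are illustrative.

class BgpDriverBase(abc.ABC):
    @abc.abstractmethod
    def add_peer(self, peer_ip, remote_as): ...

    @abc.abstractmethod
    def advertise_route(self, prefix, next_hop): ...

    @abc.abstractmethod
    def withdraw_route(self, prefix): ...

class InMemoryDriver(BgpDriverBase):
    """Toy driver standing in for a real one such as a Ryu-backed speaker."""
    def __init__(self):
        self.peers = []
        self.routes = {}

    def add_peer(self, peer_ip, remote_as):
        self.peers.append((peer_ip, remote_as))

    def advertise_route(self, prefix, next_hop):
        self.routes[prefix] = next_hop

    def withdraw_route(self, prefix):
        self.routes.pop(prefix, None)
```

Because the agent depends only on the interface, swapping the POC driver for a vendor implementation should not disturb the scheduler or the service plugin.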
So, all said, I think most of us know why we need L3 VPN. It's easier to manage because of MPLS technology, MPLS is widespread, and most providers will love it. I think I will hand over to Ryan. Yeah, so piggybacking on what Vikram was talking about with L3 VPN: this is not a use case that we have taken on to implement yet. We have code up for review that we're still iterating on, and it really focuses on just the publishing of routes for tenant networks and host routes for floating IPs. We would eventually like to get to the point where we can bake in some L3 VPN support, and we're here at the summit talking with other Neutron developers about how we can accomplish that in the next couple of cycles. Some future work: L3 VPN, as we mentioned. I think this is a good place to hand off to Jaume to walk you through this. I think another future goal, coming back to address scopes, would be to have an address scope that only involves tenant networks. Coming back to the schema: you could have address scopes that only have tenant networks, and you could use BGP and MPLS to send labeled packets from one tenant router to another tenant router of the same tenant, of course, using the external network as a backbone network. But before this, I think we have to achieve the discovery of routes, because right now we only provide advertising, so the L3 router doesn't learn anything. First, we have to achieve the learning of routes that come from another router. As for OSPF and IS-IS, we think they are such different protocols in so many ways — from a concept point of view and also from a workflow point of view — that it has been impossible for us to define an API that could cover any kind of routing protocol. So we're afraid that if someone pushes for OSPF or IS-IS, they will have to start from scratch in another extension. If someone finds a way to put them together, we're happy to listen.
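The address-scope-to-VPN mapping Vikram and Jaume sketch — stamping routes with RD and RT values per scope — is future work, so everything in this sketch is hypothetical: the mapping, the names, and the shape of a stamped route.

```python
# Hypothetical sketch of stamping advertised routes with L3 VPN
# attributes derived from the originating address scope. None of these
# names or values come from merged Neutron code.

SCOPE_VPN = {
    "scope-A": {"rd": "65000:100", "rt": ["65000:100"]},
    "scope-B": {"rd": "65000:200", "rt": ["65000:200"]},
}

def stamp(prefix, next_hop, scope):
    """Attach the scope's RD/RT so the route lands in the right VRF."""
    vpn = SCOPE_VPN[scope]
    return {"prefix": prefix, "next_hop": next_hop,
            "rd": vpn["rd"], "rt": vpn["rt"]}

route = stamp("10.1.0.0/24", "203.0.113.10", "scope-A")
```

As Vikram notes later, what a peer does with these attributes (MPLS labels, encapsulation) would be the driver's concern, not Neutron's.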
We're also pushing for route policy support when you configure the BGP speaker. And these are the resources, the current patches of the development, and this is how to get involved if you want to review patches or push for improvements. We are happy to listen to your ideas. And I think that's all. We'd like to open it up for some Q&A; we've got a few minutes. I hope that doesn't mean we had to explain it so fast. I'm Rodrigo, from Brazil. How many routing agents do you plan to support to get high availability? Are you planning to do route reflectors, and do these route reflectors talk with the infrastructure? Yeah, actually, for HA, our initial design right now is to keep it very simple. You have your DR scheduler, and a one-to-one mapping could be there: each DR scheduler can have one DR agent. But we are extending the support, so multiple DR agents could be there in the scheduler. If you talk about HA, right now we suggest a one-to-one mapping, so each DR agent can have its own replica working alongside it. The way to do it is with manual configuration: when you spawn your BGP speaker, you can tie the same BGP speaker to two DR agents simultaneously to ensure HA for the time being. Another way to think of it is that a BGP speaker is like a BGP process, so we want to be able to replicate the BGP process on any number of agents through the scheduler. That's one of the HA solutions that we're looking at. Since the BGP speakers speak on behalf of other routers, if you have two speakers that are configured the same, that automatically works as HA, because you are not routing through these speakers; you are only announcing the next hop of the route to the peer. So I think right now it's only a problem of scheduling, that's all.
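The HA scheme in this answer can be sketched in a few lines (a toy model, not the actual scheduler code): the same BGP speaker, thought of as a BGP process, is bound to two DR agents, each of which opens its own peering session and advertises identical routes. Because the speakers never carry data traffic, the duplicate advertisements are harmless.

```python
# Toy sketch of replicating one BGP speaker across multiple DR agents
# for HA. Names are illustrative.

def schedule(speaker, agents, replicas=2):
    """Bind the speaker to the first `replicas` agents; each runs a full copy."""
    if len(agents) < replicas:
        raise ValueError("not enough agents for the requested replica count")
    return {agent: speaker for agent in agents[:replicas]}

binding = schedule("speaker-1", ["agent-a", "agent-b", "agent-c"])
```

If one agent dies, its peering session drops but the surviving agent's identical session keeps the routes in the infrastructure.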
Maybe there are performance issues and things like that we need to measure. There will be one BGP session per tenant — that's what I see. Is that the idea? Well, no. Because if you have a lot of BGP sessions with the infrastructure hardware, you have another problem with the infrastructure guys, and you can't scale. So the peering doesn't happen per tenant; the peering happens on a per-agent basis. If you just deploy one dynamic routing agent, you're only going to open up one peering session with the infrastructure. So it's not per-tenant peering. But can I implement two? You can implement two, yes. Is there a limitation on the number of BGP peers you can have? I don't think we've put any limitation on that. No, we haven't tested it yet, so right now there's no limitation, but we will be checking on that. Actually, it depends on the driver — on the implementation of the driver, not on the API — so it's quite difficult to answer. And one thing is that all of these APIs are admin-facing, so it's admin controlled. It depends on your topology and what you can support, but as an architecture, there is no limitation. That's a good point: the APIs that we're exposing here are not tenant-facing APIs. These are admin-only APIs. It's an admin who configures the BGP processes and the BGP peering, and associates the external networks with the BGP process to determine what routes are going to be advertised to the infrastructure. These aren't knobs that we're putting in the hands of the tenant. So to maintain two BGP sessions, you would deploy two agents. You don't have to place the agents with your compute nodes; your agents can run on any machine you want. They can also be separated from the network node, and actually, I'd probably recommend that you deploy it that way. BGP only works in the control plane, which means you don't need access to any API — Neutron, anything.
You only need to put it on the same network as RabbitMQ, and of course it has to have connectivity with the peers. But you can put it on any host, anywhere. No, they will talk on behalf of the tenant routers. Actually, you can do it: if you define each one of the agents as the peer of the other one, there's no problem. You define the speaker, you define the peers, and you can do everything you want with the peers of each agent. But they will advertise the routers they're responsible for, so if they have the same tenant routers, it's going to be the same routes. Is there any plan for L2 VPN, like BGP EVPN? You know, EVPN can be used as a VXLAN control plane, so I think L2 VPN is useful. Is there any plan for L2 VPN? Actually, there is work going on right now in the community, as you might know, which is networking-bgpvpn, which addresses this EVPN configuration as well. So right now, in our work, we have not thought about L2 VPN using EVPN, but we are optimistic about it, and we will have a discussion with the other team to see how we can converge and make a unified API for the user. I don't know if they are here, but the guys from Orange are pushing for BGP VPN. Yeah. We're going to have a meeting of the minds here at the summit and try to come up with a common solution if we can. If you want to join us, I think it's going to be right after this. OK, OK, I will join. Currently, the BGP dynamic routing project is incomplete, right? And it doesn't support a BGP listener, right? So for now you won't support a BGP listener — you only advertise routes to the external network. How about learning external routes? That's a great question. What we're focused on in Mitaka, just so that we can lay the groundwork for future development, is a very narrow set of use cases that only involves advertising routes.
In the future, there's nothing we're doing that would preclude us from learning routes from the infrastructure and having Neutron consume those. But at the moment, that's not where our focus is. We feel that if we focus on a more basic use case to lay some groundwork, we can come back and do things like that in the future. OK, thank you. Any more questions? Hi. You told us that MP-BGP could be enabled, so the L3 agent would be working like the PE routers for the L3 VPN. In that case, are you also developing the LDP protocol, with MPLS? OK, so right now, what we have thought is that Neutron can just stamp the routes: these routes belong to this particular VPN, and here are the RD and RT values associated with them. But in my opinion, this work doesn't have to do with MPLS itself. I don't have to learn the label and then encode it using OpenFlow or something. That is probably more from the driver's perspective: I can give you the routes, and you determine what you want to do with them. That is how I see things, but I might be wrong. At this particular moment in time, we are just advertising routes outside, and I'm noting that I can stamp the routes with some more useful information, which can help me achieve more functionality. So you're not looking at MPLS right now? Thank you. I think we're out of time. Yeah, I think we're running out of time. Yeah, thanks, everybody. Thanks for your talk. Thank you very much. Thank you.