Hello everyone, thanks for coming to this talk. This talk is about interconnecting Neutron and network operators' BGP VPNs. So this is about telco stuff, but don't be afraid, we'll take care of explaining what all of this means.

First of all, what are BGP VPNs? To explain this, it's important to avoid confusion with the other kinds of VPNs you probably know, typical IPsec VPNs or SSL VPNs. What we call BGP VPNs are quite different: they have no encryption built in. The "P" in BGP VPN stands for "private", and you should think of private addressing, and hence of isolation of address scopes to avoid overlap issues. You can obviously still add encryption on top of a BGP VPN; it's just not built in the way it is in an IPsec or SSL VPN. A second important difference is that the isolation provided by BGP VPNs is not managed by the customers using the virtual private networks; it is managed by the operator that operates the shared physical infrastructure. That makes BGP VPNs very different from IPsec or SSL VPNs.

Technically, here is a very simplified, one-slide explanation of how BGP VPNs work; if you want to know more, you'll have to dig into more serious documentation. To allow multiple overlapping networks, you need data plane encapsulation for isolation, and in BGP VPNs the encapsulation initially used, and still the most widely used, is MPLS. Traffic isolation is not the only thing you can do with MPLS, so again, if you want to know more about MPLS, you'll have to go beyond what we explain here. What you can retain is that an MPLS label is used to distinguish, on the wire, the packets of the different VPNs.

Then you need a control plane to send traffic to its destination in the context of a VPN, and the protocol used here is BGP, more precisely the VPN extensions of Multiprotocol BGP (MP-BGP), but that detail doesn't really matter. You may know that when BGP is used for the global internet, we advertise a prefix and say that it's reachable via a given router, with additional properties. In BGP VPNs, we advertise a prefix in the context of a VPN, identified by what we call a route target, and we say that it is reachable via a given router using such or such MPLS label, to distinguish the traffic from the traffic of other VPNs. Again, this is simplified: a route target is not, strictly speaking, a VPN identifier; it's much more flexible, and you can do many different things with it. But this is the short summary.

These BGP VPNs were invented some time ago, initially for L3 VPNs, and they were later extended to cover Ethernet VPNs, in various flavors, the most recent one, called EVPN, being the one with the most potential today. These extensions were later generalized to support not only MPLS as the encapsulation, but also MPLS-over-GRE, MPLS-over-UDP and VXLAN. This is why we call them BGP VPNs rather than their historical name of BGP/MPLS VPNs. So these VPNs are fairly old, about as old as Ethernet VLANs, invented in the late 90s, and they have had incremental improvements since. They have lots of deployments, in particular in telcos, and they are very interoperable. There are a few IETF RFCs that describe them; you have the references on this slide if you want a starting point.
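To make the control plane idea concrete, here is a toy Python model of the information carried in one BGP VPN route advertisement, as just described: a prefix, the next-hop router, a route target identifying the VPN context, and an MPLS label. This is only an illustrative sketch with made-up values, not a BGP implementation.

```python
from dataclasses import dataclass

@dataclass
class BgpVpnRoute:
    """The pieces of information a BGP VPN route advertisement carries."""
    prefix: str               # the advertised prefix (e.g. a tenant subnet)
    next_hop: str             # the router via which the prefix is reachable
    route_targets: list[str]  # identify the VPN context, e.g. ["64512:100"]
    mpls_label: int           # distinguishes this VPN's packets on the wire

# "Prefix 10.1.0.0/24 is reachable via router 192.0.2.1, in the VPN
# identified by route target 64512:100, using MPLS label 42."
route = BgpVpnRoute("10.1.0.0/24", "192.0.2.1", ["64512:100"], 42)
print(route)
```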
And you can be aware that multi-vendor deployments are commonplace, which illustrates the level of interoperability. They are also very scalable: large operators like Orange, AT&T and others have BGP/MPLS deployments serving millions of VPN sites. This gives an idea of the scale this technology can achieve, thanks to a toolbox of established practices and protocol extensions to improve scaling. And now I'll let Paul follow up on the topic. By the way, we didn't even introduce ourselves. So, I'm Thomas Morin, working for Orange. Paul Carver, with AT&T. Tim Irnich, working for Ericsson.

So, a little bit of history. Back when dinosaurs roamed the earth, telcos sold private lines. Then we came along with Frame Relay and ATM, which were totally non-IP protocols. As IP became more and more prevalent, we started migrating our core networks to IP, and then we realized that customers still wanted that private kind of connectivity. So a lot of your companies may very well be using MPLS VPNs in your legacy IT without you knowing it, because what happened was that AT&T and the other telcos migrated their backbones to MPLS and would deploy customer edge routers at the customer premises to provide multiple VPNs. A customer wants to use, for example, private addressing, or they just want to transport traffic separately from their internet connectivity. Voice over IP is a common use case: a piece of customer premises equipment uses an MPLS VPN to attach the voice over IP phones and also, for example, provides internet service, keeping the two separate. The telcos also began using MPLS VPNs internally for our own infrastructure, to interconnect the various components of our networks, for example pieces of the mobile cell phone networks. So there are a number of use cases where MPLS VPNs have been deployed since the 90s, as Thomas has indicated.

As OpenStack people, you may be somewhat separate from the people who run your network. So you may not be aware that in your data center you've got OpenStack networking with Neutron, and at the edge somebody is managing a physical router that may very well be interconnecting your sites. What we're talking about here is bringing knowledge of your MPLS VPNs into OpenStack, whether you're a telecom operator or a customer who is purchasing MPLS VPN service. If, for example, you want to attach several of your data centers, each with its own Neutron networking, to the same wide area network, we have the ability here to interconnect your Neutron networks with the pre-existing MPLS VPNs you may be purchasing.

The basic components of this are broken down into an admin portion and a tenant portion. The admin is the person who knows what MPLS VPNs you have available. These are identified by something called a route target, and the route targets available to you are usually handed to you by someone: you bought an MPLS VPN to interconnect your data centers, and maybe you have another MPLS VPN to connect your remote offices. Maybe you're purchasing an IPsec VPN service that comes with IPsec termination on the carrier's network, which is then interconnected within the carrier's network to an MPLS VPN that delivers to your data center. These are just examples of services that your companies may already be using.
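Route targets are conventionally written as two numbers separated by a colon, typically an AS number plus an operator-assigned value, such as "64512:100". A tiny, hypothetical helper, just to make the notation concrete; the values are examples, not real operator assignments.

```python
def parse_route_target(rt: str) -> tuple[int, int]:
    """Split an "ASN:number" route target string into its two fields."""
    asn, _, assigned = rt.partition(":")
    return int(asn), int(assigned)

# e.g. the route target an operator might hand you for a data-center VPN:
print(parse_route_target("64512:100"))  # -> (64512, 100)
```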
So those route targets are a piece of outside data: your network administrators know which MPLS VPNs are identified by which route targets. The admin in your OpenStack cloud can then create a BGPVPN object that is mapped to the route target and hand that object to your tenant. The tenant can then create Neutron networks and associate those networks, or routers, to the admin-created BGPVPN object. So the admin gives the BGPVPN to the tenant, and a user within that tenant is then able to associate a network or a router to that VPN object (a rough sketch of these two API calls follows at the end of this section).

When this happens, the API calls come into the BGPVPN service plugin, and then there is a set of drivers. We have four drivers at this point in time: the reference driver based on bagpipe, an OpenDaylight (ODL) driver, a Contrail driver, and a Nuage driver. The API call comes into the service plugin, and that triggers the backend to perform the association between the Neutron network or router and the BGPVPN, which is mapped to a route target, which then connects out. The BGP peers in this diagram would, in a typical use case, be an edge router: a router that connects out to the wide area network. It could be a virtualized router, but it might very well be a hardware router with a fiber going out to the wide area network, carrying multiple MPLS-separated VPNs. The backend advertises the routes for your Neutron networks out to your wide area network, allowing you to attach your VMs to the Neutron network and have the packets carried over MPLS, out through this gateway, across the wide area network.

Take this one. So, in the particular case where an SDN controller is used, the BGPVPN service plugin will use the driver for that specific SDN controller. Of course, the details of how things are done inside the SDN controller differ from controller to controller, but typically you will have another interface, usually a REST API, that the driver uses to pass the information to the SDN controller. The SDN controller system then has a BGP speaker able to advertise routes and consume routes exchanged with BGP peers, and the SDN controller is used to configure the data plane on the virtual switches: to receive MPLS traffic and forward it to the VMs, and to forward traffic from the VMs as MPLS towards the VPNs.
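Going back to the admin/tenant workflow described at the start of this section, here is a rough sketch of what the two API calls can look like against the BGPVPN service plugin's REST API. The resource paths and body fields follow the networking-bgpvpn API; the endpoint URL, tokens and IDs are placeholders, not values from the talk.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"  # placeholder Neutron endpoint
admin = {"X-Auth-Token": "ADMIN_TOKEN"}      # placeholder tokens
tenant = {"X-Auth-Token": "TENANT_TOKEN"}

# Admin step: create a BGPVPN object mapped to the operator-assigned route
# target, owned by the tenant's project.
bgpvpn = requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns", headers=admin,
    json={"bgpvpn": {"name": "red-vpn-a", "type": "l3",
                     "route_targets": ["64512:100"],
                     "tenant_id": "RED_TENANT_ID"}},
).json()["bgpvpn"]

# Tenant step: associate one of the tenant's networks with that BGPVPN.
requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns/{bgpvpn['id']}/network_associations",
    headers=tenant,
    json={"network_association": {"network_id": "RED_NETWORK_ID"}},
)
```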
The reference driver that we added to the project works differently, of course. Like other reference drivers in Neutron, it's not based on an external system such as an SDN controller. The one we added is designed to work in a context where you use the Open vSwitch mechanism driver as the ML2 driver; we have work in progress to do the same architecture with the Linux Bridge mechanism driver. In this case, when an API call is made to define an association, an interconnection between, for instance, a network and an external BGP VPN, the driver, which is called the bagpipe driver because it relies on a component called bagpipe-bgp, uses RPCs, typically over RabbitMQ but of course it could be something else, to pass the information about this BGP VPN attachment to the bagpipe-bgp component that runs on the compute nodes. This is done via an extension added to the Neutron Open vSwitch agent.

This bagpipe-bgp component is responsible for advertising BGP VPN routes, receiving BGP VPN routes, and configuring the data plane accordingly, meaning that when you receive a route to a destination that is an MPLS route, it adds all the information so that the corresponding traffic will actually be encapsulated and sent on the right interface or tunnel. To do this, we add a bridge to the existing bridges on the compute node, a bridge called br-mpls, which allows us to segment the roles between the different components: the Neutron Open vSwitch agent is responsible only for the br-int and br-tun bridges, and the bagpipe-bgp component only for br-mpls (a small sketch of how to check this split appears after this part of the walk-through). You don't have a risk of inconsistency, because the life cycles of the different agents are different. And then the traffic is carried over MPLS towards the VPNs.

So, we have a demo. The starting point is: assume you have an OpenStack deployed, here a single-VM DevStack, an OpenStack cloud that is already interconnected to an IP/MPLS WAN using BGP VPNs. You have pre-existing customers with pre-existing VPNs in the WAN, and both are glued together with IP/MPLS routers that have the configuration in place for the BGP VPN protocols, but no per-customer information. The platform used for this demo is a DevStack VM using the Open vSwitch bagpipe driver that I just described, a lab router which is running as a VM as well, and a VPN site which is also emulated as a VM. What we will do is interconnect a virtual machine of tenant Red to the Red VPN; let's assume we have an enterprise named Red, it's not a fancy name. And we'll show something very spectacular, as you will see. Sorry, let me try to switch it on.

Okay. Initially, if we look at the routes present on the router to which the physical VPN site is connected, we don't have many routes: only the routes for the WAN site and for the IPs present on the router itself. If we look at the admin interface on OpenStack, logging in as admin on the OpenStack cloud, we see that we actually have a tenant called Red, and for this tenant we will configure a BGPVPN. What I'm showing here is the Horizon interface for the BGPVPN service plugin. Here you have an example of the creation of a BGPVPN for a tenant, which is actually the demo tenant: we just define the control plane identifier of a VPN, which is called a route target, and we create the BGPVPN. At this point, of course, nothing happens yet; no association has been made. We will do the same creation of a BGPVPN for tenant Red using a Heat template, illustrating the Heat bindings that were added to the project. Using this Heat template, we define two VPNs for tenant Red, Red VPN A and Red VPN B, and we also define a VPN for tenant Blue, in addition to the one created via the Horizon GUI.

Now, before connecting to the same OpenStack, this time as a user of the Red tenant... oh, sorry. First, if we look at the configuration of the physical router to which the physical sites of customer Red are connected, we see the configuration of the VPN for customer Red, and (sorry, switching between two pointers) we see the same route target identifier being used, the same one we defined in the admin Horizon interface.
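As an aside before continuing with the demo, here is the small sketch mentioned earlier for checking the bridge split on a compute node running the bagpipe driver. It is just a convenience wrapper around the standard ovs-vsctl CLI, with the bridge names as given in the talk.

```python
import subprocess

# List the OVS bridges present on this compute node.
bridges = subprocess.run(
    ["ovs-vsctl", "list-br"], capture_output=True, text=True, check=True
).stdout.split()

# Each bridge has a single responsible component, which avoids inconsistency
# between agents with different life cycles.
for name, owner in [("br-int", "Neutron OVS agent"),
                    ("br-tun", "Neutron OVS agent"),
                    ("br-mpls", "bagpipe-bgp")]:
    print(f"{name} (managed by {owner}):",
          "present" if name in bridges else "missing")
```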
Now, logging in as customer Red, we see the two VPNs that are defined for this customer; of course, this tenant does not see the VPNs of other tenants. Again using a Heat template, to keep the demo swift, we create an association: we create a network, in this network we spawn a VM, and we associate the network to the VPN called Red VPN A. So this is pretty short. What we see is that once the VM is created and booted, we can actually ping this VM from the customer site. So, ladies and gentlemen, you're going to witness something incredible, very spectacular: as soon as the VM has booted, you can see an ICMP echo request and the echo reply. That was a bit too fast.

Just to illustrate that everything is API-driven, we can disassociate the network from the BGPVPN, at which point the ping stops working, and then re-associate it, and then look at the routes actually present on the IP/MPLS router. Here you see the route advertised, let's say by OpenStack, by the different components deployed by Neutron to advertise BGP VPN routes: you see the VM IP advertised to the IP/MPLS routers by these systems. We can also look at the troubleshooting interface of the BGP component running on the compute nodes. In this interface, we can navigate through the different elements to see where a route was advertised from. We can also see the MPLS label that was advertised: this here is the control plane identifier for the route, and the MPLS label advertised is this one. And we can actually look at the MPLS traffic flowing to and from the compute node and see that it is using the label advertised; this is the traffic toward the VM. Of course, we have the same thing for the traffic coming from the VM, but using this time the labels advertised by the WAN. And I think we're done with the demo.

What we wanted to illustrate, in addition to the concepts behind this service plugin, the use cases behind it, and how it's actually implemented, is how we worked with the Neutron community to make this possible. It's interesting to note that, with lots of small details but overall quite easily, we found the different hooks for modularity, easy to use for the different components; but there are actually many of them, and perhaps the slide is not fully exhaustive. Typically, we are using the hooks to define extensions to the Neutron API, and hooks for loading service plugins and loading their drivers. For the specific OVS bagpipe driver, we are using the Neutron registry callbacks to get notifications on the creation of ports, networks or routers (a minimal sketch of this pattern follows below). For the integration on the compute nodes, we are using the L2 agent extension framework that was added in a past cycle. We are making increasing use of neutron-lib; even though the movement of definitions from Neutron to neutron-lib is still in progress, we are following it. For the CLI, we are again using hooks called entry points, and this work would be done in a comparable way for the OpenStack CLI. We also have plugins for Heat, for Tempest, for Horizon, and, last but not least, we are using the different hooks set up by the infra team to allow a project such as ours, or any project, to define new jobs in the CI. For some of these hooks or frameworks that we are using, we had to work with Neutron developers to bring improvements or fixes.
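Here is a minimal sketch of that callbacks-registry pattern: a driver subscribes to resource events and reacts when, for instance, a network is created. The namespace shown is today's neutron_lib one, and the handler body is a placeholder, not the actual bagpipe driver code.

```python
from neutron_lib.callbacks import events, registry, resources

def on_network_create(resource, event, trigger, payload=None):
    # A BGPVPN driver could re-evaluate network/BGPVPN associations here
    # and push the result to its backend (e.g. over RPC to the agents).
    print(f"callback fired: {resource} {event}")

# Get notified after each Neutron network creation.
registry.subscribe(on_network_create, resources.NETWORK, events.AFTER_CREATE)
```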
So it really showed that a project like ours has to work with Neutron developers, perhaps to add things that were not complete yet, or to invent new things that facilitate integration, like the L2 agent extension. But in the end, we found a hospitable enough environment to produce this as a modular project in the Neutron Stadium. Our life was also made much easier thanks to the existence of other Neutron projects to take inspiration from.

Now, you have heard about the Neutron Stadium and the requirements put on Neutron Stadium projects to match the expectations of the Neutron community. We want to say that a significant effort is indeed required to match these expectations; work is required, in particular, to get everything ready on the CI testing side. For us it had a downside: in the last cycle we were able to push fewer features because of the work on these Stadium requirements. But the good side, obviously, is that it pushes us, forces us, in the right direction, and having better test coverage in particular helps a lot with future maintenance and future work.

But this work did not happen only in OpenStack, in Neutron; it happened also thanks to a collaboration with OPNFV, and I will let Tim explain this. Okay, thank you. So, OPNFV is what we call a midstream integration project, which focuses on two things. One is the automated installation of OpenStack-based cloud environments for particular use cases: there is a notion in OPNFV called a scenario, which is essentially a particular configuration of the stack plus all the needed components for a given use case. OPNFV also automatically tests these configurations in a CI framework, quite often; each scenario is run every couple of days. As you have seen from the stack configuration for BGPVPN, BGPVPN is actually one of those use cases where you have to integrate a lot of components, some of them developed within OpenStack, some of them developed outside OpenStack. So OPNFV is the place that gives all the developers working on the BGPVPN use case in these different communities visibility into whether the overall system works. If something breaks, we usually find out very quickly, because we consume changes from all the different places, put them together, and see if it still works. And as a byproduct, since we need to do that for our own purposes, we provide a relatively simple way of deploying a system that has all the different components and is readily configured to run the BGPVPN use case.

In the SDNVPN project within OPNFV, which does this, we are right now focusing on the cases where an SDN controller is used, as introduced by Thomas; we are planning to support the reference implementation as well. We have integration with two of the four OPNFV installers, Fuel and TripleO/Apex. And we have scenarios derived from the baseline scenario that OPNFV maintains, OpenStack plus ODL taking care of L2 and L3 networking, in both an HA and a non-HA flavor, meaning in one case it's deployed on bare metal in a redundant way, and in the other case it's all on one host, in a nested fashion. So this is the brief version; we went into a little more detail on OPNFV in our companion talk yesterday, so in case you missed that, check out the video. There is more information there on what exactly we do in OPNFV. And I think you have a closing slide, right? Yep.
So, as a takeaway, the key idea to keep is that this is one API allowing tenants to control interconnections between their resources in Neutron and their BGP VPNs. The use cases behind this are: the typical public cloud operator, when it's run by a telco that needs to interconnect with business customers having MPLS VPNs; inter-DC, distributed cloud and edge cloud, where BGP VPNs can be used as a tool to interconnect data centers; and, close to that one, NFV deployments where you need to interconnect POPs. This project has multiple drivers for several SDN controllers, plus a reference driver. It has different bindings that you can use to interact with it: the CLI, Horizon and Heat.

And we have various evolutions on the radar that we can mention. We plan to complete the EVPN part of the API and bind it to the different drivers for which work on EVPN is in progress. We have remaining work to do to match, as I said earlier, Neutron Stadium requirements, in particular on the testing side. We also plan to evolve the API for finer-grained control of routing: typically static routes, playing with BGP preferences, and route leaking (a sketch of what the current API already expresses in this direction follows at the end of the talk). We will also consider supporting multiple drivers and backends at the same time, to allow migration scenarios.

One thing we can mention that is a bit orthogonal to this project, but important for it: in a typical scenario, the encapsulation that you will want between a compute node and, for instance, an edge router, is MPLS over IP, whether MPLS-over-GRE or MPLS-over-UDP. For this to be possible, we depend on these features landing in OVS, for many of the SDN controllers that this project indirectly supports. Typically, this is needed for the OVS bagpipe driver, but also by the OpenDaylight and other drivers. This has been a work in progress over the past months; actually, there was a talk by Simon Horman yesterday about the foundation work to make this possible. So we expect this to land soonish, and then to see the work on MPLS over UDP arrive at some point. The last evolution we can mention is that we expect to see improved feature parity among drivers: today, not all drivers support all the different types of association, for instance, and this is something we would like to see improved.

And the last thing I want to highlight is that such a project is, we believe, a good illustration of how work can be led efficiently across OPNFV and OpenStack. Here we actually experienced how OPNFV facilitated and triggered the work on components around the solution, typically on installers and testing. And that's it.
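On the finer-grained routing evolutions mentioned above: the BGPVPN API already lets a VPN import and export different route targets, which is the classic building block for hub-and-spoke topologies and controlled route leaking between VPNs. A sketch, using the same placeholder endpoint and token conventions as earlier; the values are examples, not from the demo.

```python
import requests

NEUTRON = "http://controller:9696/v2.0"  # placeholder Neutron endpoint
headers = {"X-Auth-Token": "ADMIN_TOKEN"}

# A "spoke" VPN that imports routes tagged with one route target but exports
# its own routes under another; matching targets elsewhere control exactly
# which routes leak between VPNs.
requests.post(
    f"{NEUTRON}/bgpvpn/bgpvpns", headers=headers,
    json={"bgpvpn": {"name": "red-spoke", "type": "l3",
                     "import_targets": ["64512:1"],    # accept routes tagged 64512:1
                     "export_targets": ["64512:2"]}},  # tag our routes with 64512:2
)
```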
Do you have questions? Pretty interesting presentation; a couple of questions. What is the OpenStack version you're using for all this testing? In the context of OPNFV, or in the context of CI testing in OpenStack? In the context of, basically, the demo that you showed: what version of OpenStack did you use? The demo was based on the Newton release, and the CI testing that we are doing is typically based on the master branch, on which we are working. But we have releases for Liberty and Mitaka, and we had backports, less maintained today, for Juno and Kilo. And the OPNFV work is currently based on the Mitaka release, I believe? Right now it's based on Mitaka, but in the context of the new release cycle starting in OPNFV, we will be based on Newton. Yeah. The second question I have is regarding the data plane and the communication with the SDN controller. I'm assuming that you are using a specific SDN controller, the OpenDaylight controller, to communicate with the physical router in terms of data plane control. What are you using for the testing? You mean in OPNFV, when we use it in CI there? Yes, with the OpenDaylight controller: what are you using for the testing? In the testing that we do in OPNFV, we use ODL, and ODL basically acts like one big virtual router that spans the whole data center and presents itself to BGP peers as one BGP speaker for all the compute nodes, a speaker that knows the routes of all the compute nodes. Other questions, maybe? Okay. So, thank you for listening to us. Thank you. Thank you very much.