Hello everyone. Sorry for the technical difficulties; we had to do a little laptop swap there, a little exchange of adapters and so forth. I'm Paul Carver with AT&T. Hello everyone, I'm Thomas Morin from Orange. And I'm Tim Irnich from Ericsson.

So we're going to talk to you about networking-bgpvpn, which is a Neutron Stadium project. The scheduling of the summit didn't quite work in our favor: we are doing a more entry-level version of this talk tomorrow at 12:15, I think. So if you don't know what MPLS is, or don't know what BGP VPNs are, we aren't going to explain that now; we'll go into a little more depth on that tomorrow. This talk, in the telecom track, assumes you know what all that is. Let's go to the next slide.

We have a little bit of history here in the NFV space. When we started using Neutron a couple of years ago, there was just the network. It was all about attaching VMs to the network. You had a public network, which could be the internet, or it could be an internal network, but that was about it. So a couple of years ago this project kicked off to introduce MPLS and BGP VPNs to Neutron. We wanted multi-tenancy, we wanted SDN controllers, and we wanted to be able to extend Neutron networks into the wide area network. A lot of the telco applications are built with VPNs as a sort of intrinsic component. We use networks for signaling, networks for OAM, and we needed to be able to attach virtual network functions to a variety of different VPNs. Let's go to the next slide.

I can cover some of the early work on this: Nachi from NTT, Pedro from Contrail, and some work from our friends at Orange. There was a sort of communications difficulty with the Neutron community in just explaining what we were trying to do, and the introduction of the Big Tent and the Neutron Stadium kind of opened the door to extending this capability into Neutron.
Let's go to the next slide. Did I skip one? There we go, okay.

So the beginning of this project was to extend Neutron with a new API for dealing with BGP MPLS VPNs, and there were a number of us interested: AT&T, Orange, Ericsson, Cloudwatt and others. I think we had a fairly large gathering at the Vancouver Summit of people who were very interested in BGP VPNs but didn't really know how we were going to achieve this. So this subproject, networking-bgpvpn, came together as part of the Neutron Stadium, and it introduced an API, a reference implementation, and a driver model. Next slide, thank you. I'm trying to move along quickly since we lost some time there, and sorry again for the technical confusion today.

So, looking at the project today: we did a Newton release, which is actually our third release, pretty much in sync with the Newton release of the other OpenStack projects.
So we have a consistent set of base features. We have an API that allows the definition of layer 2 and layer 3 VPNs; only layer 3 VPNs are actually supported today by the drivers we have, but there is already work in progress to support layer 2 as well. We have support for associating networks to BGP VPNs and routers to BGP VPNs, which is something I will explain right after. We have support for the Neutron CLI. We have today drivers for different SDN controllers, and one reference driver that is aimed at working with ML2 Open vSwitch, the Neutron reference drivers. The supported SDN controllers are OpenDaylight, OpenContrail and Nuage Networks. We also have important additional features: in particular, full Heat bindings that have been contributed to the project, a Horizon GUI that allows you to control the most important parts of what the API allows you to do, and a Tempest suite covering the API tests.

So, what we do in the networking-bgpvpn project is add into Neutron a service plugin that allows admins and users, via API operations that I will explain, to make API calls to define BGP VPN interconnections. When such API calls are made, the driver delegates the work to a backend. The backend can be either Neutron plus a driver called BaGPipe, or OpenDaylight, or OpenContrail, or Nuage, to do the actual work required to set up the interconnection with the BGP VPN. That work consists, first, in exchanging BGP VPN routes: advertising routes to the outside, to BGP peers that are typically IP/MPLS routers, and consuming the routes advertised by these routers. And on the other hand, it consists in configuring the data plane so that the MPLS traffic between the VMs and the VPNs can be carried, the data plane being typically the virtual switch, or the virtual router, depending how it's called.

So, looking at the specific instantiation of this architecture in the case of an SDN controller: when an API call is made, the work is delegated to the SDN controller. The actual operations will depend on the SDN controller, but it will do the work it has to do to advertise the routes to the outside, consume the BGP VPN routes from the outside, and configure the virtual switches through the southbound interface that it is using. At this point, traffic can be exchanged between VMs and VPNs as MPLS traffic.
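To make the delegation model concrete, here is a minimal, purely illustrative Python sketch of a service plugin that stores the API objects and forwards each call to a pluggable backend driver. All class and method names here are invented for illustration; the real networking-bgpvpn plugin and driver interfaces differ in detail.

```python
# Illustrative sketch of a service plugin delegating BGP VPN API calls
# to a pluggable backend driver. Names are hypothetical, not the real
# networking-bgpvpn interface.
import uuid


class BackendDriver:
    """Interface a backend (e.g. an SDN controller driver) implements."""

    def create_bgpvpn(self, bgpvpn):
        raise NotImplementedError

    def associate_network(self, bgpvpn_id, network_id):
        raise NotImplementedError


class LoggingDriver(BackendDriver):
    """Toy backend that just records the calls it receives."""

    def __init__(self):
        self.calls = []

    def create_bgpvpn(self, bgpvpn):
        # A real driver would push the route targets to its controller here.
        self.calls.append(("create_bgpvpn", bgpvpn["id"]))

    def associate_network(self, bgpvpn_id, network_id):
        # A real driver would wire the network's ports into the VPN here.
        self.calls.append(("associate_network", bgpvpn_id, network_id))


class BGPVPNServicePlugin:
    """Stores API objects, then delegates the real work to the driver."""

    def __init__(self, driver):
        self.driver = driver
        self.bgpvpns = {}

    def create_bgpvpn(self, name, route_targets, vpn_type="l3"):
        bgpvpn = {"id": str(uuid.uuid4()), "name": name,
                  "route_targets": route_targets, "type": vpn_type}
        self.bgpvpns[bgpvpn["id"]] = bgpvpn
        self.driver.create_bgpvpn(bgpvpn)   # the delegation point
        return bgpvpn

    def associate_network(self, bgpvpn_id, network_id):
        self.bgpvpns[bgpvpn_id].setdefault("networks", []).append(network_id)
        self.driver.associate_network(bgpvpn_id, network_id)


plugin = BGPVPNServicePlugin(LoggingDriver())
vpn = plugin.create_bgpvpn("customer-a", ["64512:1"])
plugin.associate_network(vpn["id"], "net-1234")
```

Swapping `LoggingDriver` for a controller-specific driver is, conceptually, all it takes to retarget the same API to OpenDaylight, OpenContrail, Nuage or BaGPipe.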
So this is the setup we have, most typically, when the driver used is a driver for an SDN controller. The setup is slightly different when we look at what we call the reference driver. It is a reference driver because it is aimed at working with the Neutron reference drivers, and it is light enough to run in the OpenStack CI. This driver is meant to be used when the Open vSwitch ML2 mechanism driver is used. When an API call is made, the same kind of messages are used between the Neutron server and the agents running on the compute nodes, so that the compute nodes have all the information on the BGP VPN interconnections. This information is passed, via an extension of the Open vSwitch agent, to the component actually implementing the BGP VPNs, which is called BaGPipe BGP and was made open source by Orange a few years ago. This component is in charge of exchanging the BGP VPN routes and configuring the data plane. In this case, the data plane configured by the BGP VPN component is an additional bridge, added alongside the existing bridges already defined and used by the Neutron reference drivers.

So this is the underlying architecture to deliver the service, and it's interesting to see which API constructs we have introduced to allow the different entities to define what they need in order to create BGP interconnections. As you may know, BGP VPNs are set up and created by the operator of the infrastructure, the operator that manages the shared network over which the BGP VPNs are built. So it is typically the OpenStack admin that will give a tenant access to a specific BGP VPN. To do this, the admin creates a BGP VPN object in the API that carries the technical details of this particular BGP VPN, and gives it to a specific project to use. Now, a user in this project, who already has existing Neutron resources such as a network or a router, can use additional objects to create the actual interconnections, on demand, between their networks or routers and these BGP VPNs. These objects are called the network and router associations. So these are the new API resources introduced by networking-bgpvpn. The key here is really the distinction between what the admin can do and what the tenant can do. That is really the key part of this API: allowing tenants to create on-demand connectivity based on an object that is still controlled by the admin.

Do you want to take this one? Okay. So, with that, we come to what we are doing in this context in OPNFV. As you have seen on the previous slides, no matter which particular setup you choose, you need to integrate a couple of things which are not part of vanilla OpenStack, and that is basically what OPNFV helps you with, because it is what we call a midstream integration project. It does mainly two things: automatic installation of a vanilla OpenStack base system plus the components you need in addition for a particular use case, and automated end-to-end testing of all these components that you pull together, in order to make sure it always works. Because what we are basically dealing with is multiple communities working on things in parallel, and it's pretty obvious that occasionally things break. BGP VPN, as you have seen, is such a use case, and the nice thing about this is that it gives all the upstream communities additional visibility if what they do in their particular context breaks things in the system context.

There is one particular project within OPNFV which I have the honor to lead, and that is the SDNVPN project, which aims at integrating a complete stack for BGP VPNs. The focus is on scenarios where SDN controllers are used, as Thomas showed, but there is planning, at least, for having the Neutron BaGPipe scenario as well.

So how does that actually work? The way this works in OPNFV is that there are a couple of what we call baseline scenarios, and on top of those baseline scenarios we have added the possibility to deploy the BGP VPN API extensions, the service plugin and the Heat extensions, to activate the relevant features in OpenDaylight in order to be able to run VPNs, and to do all the necessary stack configuration. That has been integrated into the supported OPNFV installers, Fuel and Apex. We have different scenario flavors, HA and non-HA, and we can deploy this either on bare metal, or as a nested setup all on one host, in the form of VMs which will then contain additional VMs.

A little bit more on what OPNFV deployment scenarios are: a deployment scenario is essentially a specific stack configuration which you can automatically deploy with an OPNFV installer, and which gets routinely, automatically tested in OPNFV CI. There are a couple of baseline scenarios that the installer projects themselves are maintaining. There is one scenario called NoSDN, which is essentially vanilla OpenStack plus OVS and the Neutron agent, and then there are two flavors for ODL: one where ODL only takes care of L2 networking, and another one where it also takes care of L3 networking. The SDNVPN scenario is derived from this L3 scenario, where ODL takes care of everything.

So now we're getting into demoing how that actually works. So how can you do that?
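Before the demo, a quick aside: the admin/tenant split in the API described a moment ago can be modeled in a few lines. This is an illustrative Python sketch only, not the actual Neutron implementation; all names and checks here are assumptions made for clarity.

```python
# Toy model of the admin/tenant split in the BGP VPN API: only the admin
# creates the BGPVPN object (carrying its route targets) and hands it to
# a project; a user in that project can then associate networks on
# demand. All names are illustrative, not real Neutron code.

class Forbidden(Exception):
    pass


class BGPVPNAPIModel:
    def __init__(self):
        self.bgpvpns = {}       # vpn_id -> {"route_targets", "project"}
        self.associations = []  # (vpn_id, network_id) pairs
        self._next = 0

    def create_bgpvpn(self, caller, route_targets, project):
        if caller != "admin":
            raise Forbidden("only the admin may create BGP VPN objects")
        self._next += 1
        vpn_id = "vpn-%d" % self._next
        self.bgpvpns[vpn_id] = {"route_targets": route_targets,
                                "project": project}
        return vpn_id

    def associate_network(self, caller_project, vpn_id, network_id):
        # A tenant may only attach networks to a VPN given to its
        # project; the route targets stay under admin control.
        if self.bgpvpns[vpn_id]["project"] != caller_project:
            raise Forbidden("BGP VPN not available to this project")
        self.associations.append((vpn_id, network_id))


api = BGPVPNAPIModel()
vpn = api.create_bgpvpn("admin", ["64512:77"], project="tenant-a")
api.associate_network("tenant-a", vpn, "net-1")      # allowed, on demand
try:
    api.associate_network("tenant-b", vpn, "net-2")  # rejected
except Forbidden:
    pass
```

The point of the sketch is the asymmetry: the object carrying the routing details is admin-owned, while the connectivity decisions are tenant-owned.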
It's actually a pretty simple way of getting a running system together. We're going to show it with the example of Fuel; if you're interested in doing the same with Apex, come see us, and we can give you a rundown on that as well.

Before I go into the actual demo, there are a few things that I have done beforehand which are not going to be visible in the video. We have already set up a VM which contains Fuel, we have set up a number of additional VMs which will mimic our compute nodes, which are running and already detected by Fuel, and we have created Linux bridges for the underlay. What we'll see in the demo is: we will run a quick check on the Fuel GUI to see if all the plugins that we need are in place, we will create an environment, activate the right features within the environment, and deploy it. I will fast forward that part because it takes some time, so it gives you the opportunity to have some popcorn in between, and then I will give you a brief rundown of what you can actually do with the system once it's deployed. Can we run it?

So, first thing, is it running? Okay, so a couple of VMs are there: the Fuel master, one controller, two computes. We have a bunch of underlay networks already configured, one default network and four additional networks, and the interfaces of the VMs are associated to those networks. The configuration looks like that: we have one admin network. Can you stop that for a second, please?
I did. So we have one admin network where the nodes are basically PXE booting from the Fuel server, we have the public network, and then we have private networks so the controllers can talk to the compute nodes, and between the compute nodes there is something that is called internal transport, which is a feature of ODL. And then there is the sort of usual management network. We can continue.

All right, next we go to the GUI of Fuel, which is a web GUI that you can access through your browser. We will first check the plugins. There are three plugins we need to pay attention to: first of all, the BGPVPN plugin itself, which does the API extension deployment and all the related things; then a particular version of Open vSwitch that we use, DPDK support is what we are looking for here; and then the OpenDaylight plugin, which deploys the OpenDaylight controller and into which we have put functionality to install it the way we need it.

So the next thing we do is create an environment. We give it a meaningful name and we go next. We need to set a few options. This one can stay as it is, and then we need to select Neutron with VLAN segmentation. Sorry, Neutron with tunneling segmentation.
And then, all right, I originally thought I'd say more here, but this can stay as it is as well, and also these settings. There is additional stuff that you can put in, but we don't need it for our deployment here. So now we can create the environment, and the next thing is to get nodes into that environment.

Well, the first thing we actually do is go to Settings and activate the actual features. Based on the Fuel plugins that we saw originally, which are available on that Fuel server, we now have to tell Fuel that it should actually deploy those plugins in addition to the base system. So we get the vSwitch deployed, we get OpenDaylight deployed; the BGPVPN extensions are grayed out as long as the BGPVPN plugin is not selected, and once you save that, you can select those extensions as well. That's a little bit of a pitfall, important to be aware of; sometimes users get confused at this point.

Now we save that and go back to our node configuration. Now we have the three VMs that we can deploy to, and we will make one of them the controller node, meaning it is the base OpenStack controller and is also going to host the OpenDaylight controller. Now we have two nodes left, which we will make compute nodes, so we give them the compute role, apply the changes, and now we're ready to go.

Sorry, I forgot about one step: we need to configure the networking. We have our four networks, and we need to associate the interfaces of the VMs to those networks. Well, this is now done for one compute node; we need to do it for the controller node and for the other compute nodes as well. The configuration there is slightly different.
So we get the public network to the right interface, then the management network and the private network, and we apply that. Now we can run a check on whether the connectivity is the connectivity we need to have. It's important to pay attention to the VLAN tags which are given to those networks, so that on the underlay the traffic actually gets properly separated. Now we do the connectivity check, and in this case it will succeed, and now we can go to the deployment.

So now all the software is actually pushed down to the nodes, both the controller and the compute nodes. You see that it now says it's deploying, and if you go to the nodes, you can actually see a nice progress bar. This one is going to take a while, so can we fast forward a little bit? Okay, now it's almost done.

Okay, and now we can see the system that we have actually deployed. You can create BGP VPNs on it; you can go to the list of VPNs and see them. You can go to the list of networks, take a UUID out of that list, and associate it to one of the VPNs, as Thomas explained earlier. So you basically just have to copy the ID, and then you see that it has been associated to the VPN, and if you go to the VPN display and look at the list of associated networks, you'll find that that network is actually there now.

Yeah, and I think that's it. So I think we have one last slide, right? You want to take that one, a short conclusion? We are right on time.
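For those who prefer the API to the GUI, the same associate-a-network step shown in the demo can be done with plain REST calls against the Neutron endpoint. The helper below only builds the request paths and JSON bodies; they reflect my recollection of the networking-bgpvpn API reference (`/v2.0/bgpvpn/bgpvpns` and its `network_associations` sub-resource), so double-check them against your deployment before relying on them. The UUIDs used are hypothetical placeholders.

```python
# Hedged sketch: build request paths and bodies for the BGP VPN API as
# I recall them from the networking-bgpvpn API reference. Verify the
# exact paths and body shapes against your own deployment.

def bgpvpn_create_request(name, route_targets, vpn_type="l3"):
    # POST body for creating the admin-owned BGPVPN object.
    return ("POST", "/v2.0/bgpvpn/bgpvpns",
            {"bgpvpn": {"name": name,
                        "route_targets": route_targets,
                        "type": vpn_type}})


def net_assoc_request(bgpvpn_id, network_id):
    # POST body for the tenant's on-demand network association.
    return ("POST",
            "/v2.0/bgpvpn/bgpvpns/%s/network_associations" % bgpvpn_id,
            {"network_association": {"network_id": network_id}})


# Hypothetical IDs, for illustration only.
create = bgpvpn_create_request("customer-a", ["64512:1"])
assoc = net_assoc_request("0f9d472a", "1b3c5d7e")
```

The requests would be sent with the usual `X-Auth-Token` header against the Neutron endpoint, exactly like any other Neutron extension resource.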
I think nearly, yes. So, as we hope you understood, this is about one API: one API to allow tenants to control their interconnections, the interconnections between their resources on OpenStack and their BGP VPNs. There are multiple use cases behind this. One of them is the kind of traditional case of a public cloud operator that needs to interconnect business customers, customers of MPLS VPN offers. Another one is inter-DC, which can apply typically in distributed cloud contexts or at the edge of the cloud. And the third one, which is pretty important these days, is the case of distributed NFV deployments, when you need multiple PoPs hosting NFV workloads to be connected together.

A takeaway is that this project has progressed pretty well in the past year and a half. It has multiple drivers for the key SDN controllers that have this ability to interconnect with BGP VPNs, and it also has an implementation in Neutron. We have different interfaces to interact with this: the CLI, the Horizon GUI, and the Heat bindings.

If we look further down the road, we will work in the next release on completing the E-VPN part of the API, and the corresponding bindings in the drivers will happen in parallel. We have remaining work to do to match the Neutron Stadium requirements, in particular having more functional testing. We want to work on evolutions of the API for finer-grained control of routing, for instance route preferences and route leaking. We will also consider supporting multiple driver backends simultaneously, typically to handle transition scenarios between the different backends. Something which is not in the project, but which is very relevant in its context, is support for the MPLS-over-GRE encapsulation, which we hope is close to landing in Open vSwitch, and next, we hope, MPLS over UDP in Open vSwitch, which is a component used by many solutions.

A last point I can mention is that we expect drivers and backends to evolve toward better feature parity; today we have some drivers that don't support all the options, all the alternatives, provided by the API. And a final point we want to highlight is that this is a project that really illustrates how a project inside OpenStack, in particular in Neutron, can work hand in hand with a project in OPNFV. Here we really experienced the fact that OPNFV provided lots of incentive to complete the different building blocks, and actually did the work to validate and consolidate how they can be installed together and tested together.

If you want, we are available for questions; we have a few minutes. Any questions? None? All right, thank you. Thank you very much. Thank you.