OK. My name is Maria Napierala. I am with AT&T Mobility. [unintelligible] Shall I start straight away? I will first talk about some networking limitations in default Neutron OVS ML2, which in my opinion actually prevent doing really scalable service chaining. Then I am going to talk about the service chaining requirements, and then about how MPLS BGP VPNs can actually implement service chains. [unintelligible remarks on current technology trends in data-center networking] The first is overlays over scalable IP-fabric solutions, like Clos designs. The second is SDN, software-defined networking, which basically provides programmable forwarding planes. And the third one is standards-based routing technologies, today implemented in data centers and overlay networks. [unintelligible] And this is all actually possible because of how Neutron was designed. The Neutron architecture is very flexible and extensible, mainly through the plugin mechanism and its API extensions, which allow different technologies to be implemented and so allow for innovation. [unintelligible remarks on plugins and applications; the closing point appears to be that the implementation should not be monolithic]
[unintelligible remarks on strategy and the modular design of Neutron] The first limitation is that the default Neutron router is implemented as a Linux appliance, a virtual router running on a Linux server; switching is distributed, but routing is not, so it does not scale. Second is that routing is limited to static routing right now. This is being addressed, because dynamic routing is being introduced into Neutron. However, it is still in the context of the Linux virtual router, so the first problem is still not being addressed. And the third one is that, because routing is not a core feature, ECMP load balancing is also not a core feature. That is also problematic because, of course, there is Load Balancer as a Service, but Load Balancer as a Service does not have infinite capacity, right? Eventually you have to load-balance to the load balancer, right? So you need some kind of core load-balancing functionality. I will go one by one through those limitations, and then hopefully get to how to resolve them. So I will say what is needed and what Neutron, default Neutron I mean, doesn't do. Cloud applications require optimal and controlled connectivity. Optimal connectivity basically means that you would like to traverse from network to network without going through a virtual router in the middle.
And even with DVR, with distributed virtual routing, the traffic still has to go through that router, right? It's switching and it's routing; the routing is not native. I was in the DVR talk yesterday, and it's implemented, but it's quite complex. The Linux appliance is still there as a router, and it's very complex to actually do east-west routing. Another limitation associated with the lack of core routing: today cloud applications are deployed and managed as a service, and the owners of those services would like to define and control their access policies. This is not possible today with the default Neutron router, because you can define the policy only at the exit of the client network. So there is an application, or a client of the service, and when the policy is applied, it's on the router at the exit of this client network. Basically this means that the policy definition is equivalent to its implementation, and this is not what is needed; this is very limited. Second is the lack of dynamic routing. This is much more obvious, and I think the community in general agrees with this limitation; it's being addressed. Typically it is explained in terms of resiliency: if you lose one path to the external network, then with dynamic routing to the external network you can fail over. But also, with the lack of dynamic routing it's very difficult to extend networks across data centers, or to preserve the virtual network context across the WAN. That's especially true for private clouds, where enterprise customers already have virtual networks in the WAN, in the metro network, and they would like to extend them into the data center. It's difficult to do if you don't have dynamic routing.
And the second piece of this lack of dynamic routing is that cloud tenants want to be able to dynamically plug in services between networks, and you will see, I have more details about it, but to do that correctly, in a scalable and automated manner, you need dynamic routing. So default Neutron doesn't have dynamic routing right now, and there is no generic service insertion support either. The third limitation, which I mentioned before, is the lack of integrated load balancing. Network services which are deployed in virtual machines require horizontal scaling, so the traffic has to be load-balanced to those multiple instances. There could be tens or hundreds of instances to which the traffic has to be balanced, so the notion of having integrated load balancing is crucial. These limitations and their solutions are pretty obvious. Default-gateway-only routing is not sufficient; we need fully distributed routing, just as we have fully distributed switching in Neutron today. There is static routing; we need dynamic routing. There is no built-in load balancing; we need fully distributed load balancing. And actually, if you do the fully distributed routing correctly, then load balancing will come for free. All three of those aspects are very important: these are the capabilities which are needed to implement dynamic service insertion, also known as service chaining. And I just want to quickly explain what service chaining is and how it relates to Neutron. Service chaining is basically a form of network policy. Network policy defines connectivity and also the insertion of network services, so service chaining is simply a form of network policy; we don't need to reinvent a new construct for it.
And operators, for example, I work for AT&T Mobility, so one of the things we want to do: this little, very simple service chain represents the so-called Gi interface of a 3GPP network, where your data call terminates on the P-gateway, the packet gateway, the mobility packet gateway, and then needs to be processed. One of the processing examples is port 80 traffic, HTTP traffic, which typically goes through an ADC load balancer, then through an HTTP proxy, then a firewall function, and then to the internet. So this is the level at which an operator would like to express the policy, and then get some magic such that this policy is implemented in the OpenStack cluster. And that's where OpenStack and the Neutron plug-in come into the picture, because the service chain is composed of Neutron networks. I drew a little picture: here on the left-hand side you have an abstract representation, and on the right is the implementation of it. We have some overlay networks which are stitched together in such a way that the traffic from the source, which is a network attached to the P-gateway, to the network attached to the internet, which is an external network, goes through those service appliances. We translate and program such that this happens in the actual physical network; it goes from the what to the how. Now, why are routing, dynamic routing, and distributed routing so important for service chaining? Here I'm showing a very simple example where we have two service chains. In one we go from network A to network B, and the traffic flows through appliances S1, S2, and S3, which may each run as multiple instances; I'm showing some with two, some with three, but it could be tens, could be hundreds. And then on the other service chain we have something else, maybe different appliances, and only two of them.
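The policy just described, a traffic classifier plus an ordered list of services between a source and a destination network, can be sketched as a simple data structure. This is only an illustration; all the names here (`ServiceChainPolicy`, the network and service labels) are hypothetical and do not correspond to any actual Neutron API.

```python
# Minimal sketch (hypothetical names): a service chain expressed as a
# network policy, i.e. a classifier plus an ordered list of service hops.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Classifier:
    protocol: str = "tcp"
    dst_port: Optional[int] = None  # e.g. 80 for HTTP traffic


@dataclass
class ServiceChainPolicy:
    source_network: str              # e.g. the network attached to the P-gateway
    dest_network: str                # e.g. the external/Internet network
    classifier: Classifier
    services: List[str] = field(default_factory=list)  # ordered service hops


# The Gi-interface example: port-80 traffic goes ADC -> HTTP proxy -> firewall.
gi_policy = ServiceChainPolicy(
    source_network="pgw-net",
    dest_network="internet",
    classifier=Classifier(protocol="tcp", dst_port=80),
    services=["adc-load-balancer", "http-proxy", "firewall"],
)
```

The point of expressing it this way is that the operator states the "what" (the policy), and it is the backend's job to translate it into the "how" (stitched overlay networks and adjusted routing).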
So basically the point of service chaining is that there has to be a way to communicate reachability: clearly network A has to know how to get to the prefixes in network B, and the same for network C. Networks B and C could be tenant networks, so those prefixes could simply be attached to some VMs, or prefixes could be injected through those networks. For example, network B could be a network attached to the Internet, learning some prefixes from the Internet. Maybe network C is attached to some VPN in the WAN, learning some prefixes through that network. But it could also be that network B is simply a bunch of VMs sitting on a particular subnet. Very often, for example in mobility, the mobile pools are sitting on the P-gateway, and they somehow have to be injected into the OpenStack routing system. So there are definitely routes which need to flow from ingress to egress and vice versa. In this particular case, network A has to learn the prefix of network B, and the routing of that prefix, in this case 20.0.0.0/24, has to be such that the traffic flows through this particular chain. The same for network C: the routing has to be adjusted in such a way that when the traffic leaves network A it is attracted to the lower chain, and when it leaves the chain it can actually be forwarded to the destination 30.0.0.5. So this is what it means to integrate the service chain with routing. Without this integration it cannot happen automatically; you are not going to go around and configure static routes everywhere. It's simply not possible, because new prefixes are injected all the time; it is a dynamic scenario. So that's the service chain itself. Now the second topic is virtual appliances and load balancing. Here we have a situation where services are deployed in multiple instances, multiple VMs.
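The route propagation just described, where an injected prefix such as 20.0.0.0/24 must end up installed at the ingress network with next hops that pull traffic through each service in order, can be modeled with a toy function. This is a deliberately simplified sketch with made-up names, not how any real routing stack works.

```python
# Sketch (simplified model): when a prefix is injected at the egress network,
# every hop along the chain needs a route for it whose next hop is the
# *following* chain element, so traffic is pulled through the services in order.

def propagate_prefix(prefix, chain):
    """chain: ordered hop names from the ingress network to the egress network.
    Returns a {hop: next_hop} route table for `prefix` at each hop."""
    routes = {}
    for i in range(len(chain) - 1):
        # Each hop forwards traffic for the prefix to its successor in the chain.
        routes[chain[i]] = chain[i + 1]
    return routes


chain = ["network-A", "S1", "S2", "S3", "network-B"]
routes = propagate_prefix("20.0.0.0/24", chain)
# Ingress network A reaches 20.0.0.0/24 via the first service instance,
# and the last service forwards on to the destination network.
```

This is exactly the adjustment that static configuration cannot keep up with: every time a new prefix is injected, the whole table has to be recomputed, which is why dynamic routing is a prerequisite.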
So for example, service one has two VMs, service two has four, maybe service three has three VMs. To scale horizontally and automatically that way, we have to have load balancing at each hop of the chain: there has to be load balancing at the compute host at the source network, which is network A, and then at each network which connects consecutive hops. Because imagine you would like to do the load balancing just from the source network: the number of combinations you have is just combinatorial. You cannot do that unless load balancing is embedded in the product. And of course the load balancing has some additional requirements. It has to be flow-based, because to preserve a traffic flow in case of an instance failure the load balancing has to be flow-aware; and for appliances which are stateful, such as firewalls, the flow-based load balancing has to be symmetric. The solution has to preserve flows in both directions, because of the state. The third thing I wanted to mention, which is connected with load balancing, is so-called high-availability-aware load balancing. Very often appliances run their own high-availability protocols on some private network. In this particular case I chose the second service, service two, which has four instances; it sits on some private network, and those instances are sending health checks to each other, usually application-level health checks, and if one of them fails, one of the other instances takes over for the failed one. Somehow that information has to be communicated to the network which sits in front of it, which is network C in this case. So for example, if instance 4 fails here, somehow it has to be communicated to network C not to send traffic to that instance anymore, and this has to happen dynamically. That particular instance has to be removed from the ECMP set at network C, and this also has to be done through routing, because it's a dynamic event.
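The flow-aware, symmetric load balancing described above can be sketched as follows: sorting the two endpoints of the 5-tuple makes both directions of a flow hash identically, and an HA failure event simply shrinks the ECMP set. All names are illustrative; a real implementation would use consistent hashing so that surviving flows are not remapped when an instance is withdrawn.

```python
# Sketch (illustrative): symmetric flow-based hashing over an ECMP set of
# service instances, with dynamic removal of a failed instance.
import hashlib


def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    # Order the endpoints canonically so A->B and B->A produce the same key;
    # stateful appliances (e.g. firewalls) need both directions on one instance.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return f"{a}|{b}|{proto}"


class EcmpSet:
    def __init__(self, instances):
        self.instances = list(instances)

    def pick(self, key):
        # Flow-aware: the same flow key always maps to the same instance.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.instances[h % len(self.instances)]

    def remove(self, instance):
        # HA event: the failed instance is withdrawn from the set dynamically.
        self.instances.remove(instance)


ecmp = EcmpSet(["svc2-vm1", "svc2-vm2", "svc2-vm3", "svc2-vm4"])
fwd = flow_key("10.0.0.5", 40000, "20.0.0.9", 80, "tcp")
rev = flow_key("20.0.0.9", 80, "10.0.0.5", 40000, "tcp")
assert fwd == rev          # symmetric: both directions hash identically
ecmp.remove("svc2-vm4")    # instance 4 fails; no new traffic is sent to it
```

Note that naive hash-mod remaps many flows when the set shrinks; that is why production designs pair the HA signal with consistent hashing or per-flow state.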
So I have shown you three limitations of the current default Neutron OVS ML2 and also how to solve them, and you can see that basically the way to solve them is to implement fully distributed routing as a core Neutron function, as switching is today; then load balancing will come for free, and dynamic routing is also something that is already happening in Neutron. So actually there is already a technology which does those things, and it is called MPLS BGP VPNs. It is a technology which has existed for quite some time, so it is very well tested, it scales very well, and it can address all those challenges of service insertion, and of networking as such in OpenStack: within the OpenStack cluster, beyond the OpenStack cluster, across OpenStack clusters. And obviously reusing something which is already there and in use is much easier, more effective, and more economical than defining new protocols, which takes time and effort; they also have to be tested, which takes a long time. And standards don't mandate implementation; it's only the message format which needs to be met.

So here I'm showing how you do service chaining with MPLS BGP VPNs. MPLS BGP VPN has this notion of a virtual routing and forwarding instance, or VRF table, which is used for networking, but the same mechanism can be used for service chaining, so it gives you immediate integration with routing. How it is done is that, in this example, we have the two service chains I showed before, and we have those VRFs, which are basically distributed routing tables. They sit at the tenant networks, the ingress network on the left and the egress network on the right, and also at the service instances. So when the prefixes are injected, the reachability is, or should be, automatically adjusted, so the traffic is forced to go through the particular instances in a particular order. And BGP MPLS VPNs have three kinds of building blocks to do that. The first is the notion of a route target, which basically defines the isolation. Next-hop-self changes the next hop; it means "come to me, don't go directly from the left, but go through me." And then the label: if you reach the service VRF on the compute node, you have to have a label to tell you to go to the first green box, not the second one; the label does that for you. So this flow is the control-plane flow: the traffic will flow from left to right, but the control plane goes from right to left. That's for the top service chain, and we have the same situation for the lower chain.

Then traffic steering is basically an ACL which is applied at the ingress, not at the egress, of the network. As I mentioned before, default Neutron does not allow you to do things at ingress; you have to do things at egress, and that breaks a lot of things. So here the ACL is at the ingress, and now we have some traffic flowing this way and some other traffic flowing that way; maybe the ACL says port 80 traffic goes to the top chain and non-port-80 traffic goes to the bottom chain. Then we also insert load balancing at the places where we have scaled-out services. The scaled-out service is the green S2 service, so the load balancing is applied at the VRF at the previous hop; the same for the lower chain, where the load balancing is applied at the VRF instance before the scaled-out service. And the integration with HA is applied at the places where we have scaled-out services: if there is any failure, the VRF is informed of the failure and can propagate this information to the upstream hop, such that the upstream hop can remove the failed VM from the load-balancing set. So basically the building blocks here are the VRF, the ACL, and BGP multipath, which will do service chaining for you immediately; you don't have to reinvent anything.

The applicability to a VPN is obvious. I am just listing some properties of MPLS BGP VPNs which, as you can see, would be very useful in Neutron. The definition of VPNs is policy-based by definition; a virtual interface may be a member of different VPNs (the extranet construct); there is support for traffic filtering with BGP flow spec; these are all standards, by the way. There is proven scale: deployments with many millions of routes are common; we in our network have multiple millions of routes. There is optimal route distribution: the routes don't go to all the computes; the VRFs don't see all the routes, only the routes which they need to see, and that is done by the standard called BGP route target constraint. There is support for scalable multicast, and there has been multi-vendor interoperability for at least 10 years.

So the question to ask is why not use this technology as the backend technology for a modular layer 3 plugin. This seems to be low-hanging fruit which could be used, and it is used, but it is not part of core Neutron; it is being deployed, but not as part of core Neutron. And here I want to list some of the standards and open-source projects which define or use MPLS BGP VPNs. There are two RFCs which are the base; then there is an IETF draft which extends that notion to the end system, meaning to the compute; then there is an informational draft in the IETF which describes what I explained here, how the service chaining is done. And of course there is OpenStack Neutron, and there is one open-source implementation of this technology, OpenContrail, which is compatible with version 2 of the Neutron API but of course needs API extensions to implement this technology today.

So I think I am finished. Yes, please? [Audience question] No, no, the BGP does not go to the compute at all. The question was whether the BGP has to go to the VNFs. I mean, BGP doesn't even have to go to the compute, but it doesn't go to the VNFs for sure. That's a pictorial view; I'm showing here how the traffic flows, but if you can see here, the control plane is only touching the VRFs; the control plane does not go to the VNFs. You see those lines here, I don't have a pointer, but if you see the lines, RT plus next-hop-self plus labels, they are exchanged between the VRFs running on the compute nodes. You can imagine they are replacing OVS; a VRF, imagine, is like OVS. So the information does not go to the VNFs; the VNFs know nothing about this technology at all.

[Audience question] Not necessarily; that's not how it is implemented today. The BGP is only in the central controller, and you don't have to run BGP down to the compute. For example, OpenContrail is using XMPP, but it can be another protocol; it doesn't have to be XMPP. Any other questions?

[Audience question about what is available for public inspection] I think it's here, yes. I mean, we have our own internal implementations; we are working on this. It will be for public use, because we are building the cloud based on some of these technologies. I don't know if somebody has implemented it open source.

[Audience question about DVR] No, we have not implemented it with the DVR. It kind of replaces the DVR. DVR is still based on the Linux namespaces; this is not. This replaces OVS or the Linux bridge. I don't know how the service chain would work with DVR; somebody would have to demonstrate it. Because it still has the issue that the DVR router is shared; it's not a private router, it's a router shared between different virtual networks. So where would you apply the policy? I don't know. At the exit of the network?

[Audience member] Hey Maria, we can use the mic here too, guys. Just one question: are you at all looking at NSH header technology? Some of the announcements have just recently come out.

So the difference, and it's a good question, the difference between this and the NSH header is that this doesn't require a change in the VNF. The NSH header would have to be understood by those services, and this is a complicated thing; I don't know if you know of any services which understand it. Maybe there are some, but you need it everywhere. The second issue is that I don't know how NSH is signaled; there is talk about signaling, but how it is actually signaled is not clear. There is the notion that you may include metadata in the packet, in the header, but it's better to do it with the IP header than with some new header, which will introduce another complication.

[Audience question] In your solution, what would you advocate: associating the VPN with the Neutron router construct, or associating it with the network?

So what I am suggesting is to use the fully distributed routing: replace the Linux bridge or OVS with a fully distributed router, not to add routing as a service, but to have the routing function as a core part of Neutron. It will simplify a lot of things; it will help. There is an effort in OpenStack which is the group-based policy project, and yes, it's great to have policy, but how it's going to be implemented with default Neutron is not clear; with this kind of technology you can implement it very easily. So that technology basically does for you routing, switching if you want, and thus network policy implementation, in a very clear and clean way.

[Audience question] Isn't OVN trying to do that? Yes, so OVN is kind of an example, I guess a proof, that maybe OVS has to improve. [unintelligible] So maybe that's how OVN should look.

Sure. If there are no more questions, thank you.
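To recap the route target, next-hop-self, and label building blocks discussed in the talk, here is a minimal illustrative sketch of a VPN route being re-advertised right to left along a chain, with the next hop and label rewritten at each service VRF. It is a toy model with invented names, not a BGP implementation.

```python
# Sketch (illustrative): a VPN route originated at the egress network and
# re-advertised right-to-left through the service chain's VRFs. Each hop sets
# next-hop-self ("come to me") and allocates a fresh label that selects the
# correct local instance; the route target keeps the chain's VRFs isolated.
from dataclasses import dataclass


@dataclass(frozen=True)
class VpnRoute:
    prefix: str
    route_target: str   # defines the isolation (which VRFs import the route)
    next_hop: str       # rewritten at each hop (next-hop-self)
    label: int          # tells the forwarder which instance behind the VRF


def readvertise(route, vrf_address, local_label):
    # next-hop-self plus a new label; prefix and route target are preserved.
    return VpnRoute(route.prefix, route.route_target, vrf_address, local_label)


# Egress network B originates the route; each service VRF re-advertises it,
# so the control plane flows right to left while traffic flows left to right.
r = VpnRoute("20.0.0.0/24", "RT:chain-1", next_hop="net-B", label=100)
for vrf, label in [("vrf-S3", 201), ("vrf-S2", 202), ("vrf-S1", 203)]:
    r = readvertise(r, vrf, label)

# What ingress network A finally learns: send traffic for 20.0.0.0/24 to the
# first service's VRF, with the label picking the right S1 instance.
```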