Okay, good afternoon everyone. Welcome to the presentation. Our talk is about Neutron-to-Neutron and SDN backbone traffic engineering. A short introduction: my name is Yuri Babenco, I work for Deutsche Telekom, focusing on our NFV cloud. I'm Sudeep Charvatour, I work for Juniper SDN engineering. My name is Sudeep Kapoor, I'm a distinguished engineer at Juniper; I work with the OpenContrail team.

Okay, so let's start with a problem statement. As we know, service providers have established IP/MPLS backbones. At the same time, if you look around, a lot of cloud providers are building dedicated inter-data-center WAN connectivity for their inter-data-center needs and communication between data centers. Service providers probably need to consider how to utilize already existing backbone assets to provide better connectivity and fulfill the needs of data centers. Distributed data centers have traffic engineering needs, and it would be ideal if data centers were able to inform the backbone of these needs and request traffic engineering resources from the WAN in between. This essentially results in an API between the data center orchestration piece and the backbone orchestration piece, at the SDN controller level. This traffic engineering API does not exist today. So the focus of our talk today is on this API: we are going to discuss potential use cases and potential benefits, as well as a model of how we imagine this can be done.

The main key trends we observe today in the service provider area: first of all, Neutron/OpenStack is obviously the de facto standard inside data centers, leveraging SDN technology. At the same time, there is a big trend inside the backbones themselves, where service providers think a lot about how to utilize existing assets and existing technology, and how to make them work better with new approaches coming from the SDN world. And what we also see is that right
now there is no direct interface which allows data centers and the backbone in between to interact on traffic engineering needs.

Which use cases can we imagine if this interface existed? We can think about latency-sensitive applications, for example the developments inside the 5G standards around low-latency networks. We can think about regulatory constraints, where some traffic should not leave the borders of a particular country due to regulatory requirements. We can also think about guaranteed bandwidth for end-to-end connectivity, if we want to think about data center interconnect use cases. So these are potential use cases which could emerge if we had such an API between data centers and backbone assets.

The main requirements one could have on such an API: first of all, we are working here with two different domains. We have a backbone domain, which has established technology and its own pace of innovation. At the same time, we see huge development in the data centers themselves: projects like Neutron, new developments with containers. We want to have a common set of protocols and APIs between these two domains, but any concept we think about should not limit the pace of innovation in either of these areas. We would not want to couple these areas tightly; they should be able to evolve on their own.

The second aspect is separation of concerns. Obviously, there are dedicated resources and dedicated operational teams today which focus on backbone and data center assets. We think these separate teams should be preserved as they are today, due to different requirements, different expectations, and different technology in use. Neutron is the de facto standard inside the data center, so any concept we discuss today should, first of all, be able to work with the reference implementation inside OpenStack Neutron, but
should also be flexible if you would like to use SDN technology, SDN controllers, on top of that.

Here is a high-level look at a service provider backbone. What we see here is an interworking of different assets. We have data centers; we always try to think about so-called back-end and front-end data centers, which help to meet our needs in the service provider area. Front-end data centers are closer to the customer, distributed across the country, whereas back-end data centers are more like classical data centers providing high availability and reliable service. We also have telco assets, or service provider assets, like edge routers or BNG routers (broadband network gateways), which deliver our access services. The idea here is basically to say: we want to combine these assets and think about use cases which we can build out of these different technologies.

So what's the model today? DC and backbone networks have decoupled control planes. There is a separate workflow that the backbone providers use to provision TE networks; DCs have their own methodology. So sometimes the backbone network is not aware of the real-time demands of a DC network, and since they don't talk to each other, any change requires a triggered setup on both sides, the DC and the backbone. The backbone conducts its own capacity planning and demand forecasting. DC networks, for the most part, assume the fabric has infinite capacity; we try to load-balance across links, and when needed we add more links. Backbone networks use traffic engineering to squeeze every ounce of capacity from the network. So the approaches taken are different.

This is just a layout slide highlighting the management plane, the control plane, and the data plane. In subsequent slides, when you hear control plane and data plane, this is what we mean.
If you look at it, there are three verticals: you have your DC layer, and then you have your WAN orchestration, WAN control plane, and WAN data plane, and of course the transport. In some cases, technologies like MPLS play a role in both the control plane and the data plane: in the control plane for label distribution, whereas in the data plane it's for packet forwarding.

This is more a slide on roles and responsibilities. The backbone SDN controller provides visibility into the networking resources within the backbone. This is your typical network graph: your nodes and links, and then the attributes, the traffic engineering attributes like bandwidth, or shared risk link groups, the fate-sharing mechanisms of a given link. These are the things the backbone SDN controller gives you visibility into. The Neutron DB provides visibility into networking resources within a DC. These are modeled differently today.

Data centers generate various types of traffic, including real-time traffic, delay-sensitive traffic, and best-effort traffic. We need a common language to describe this, so that the types of traffic generated by data centers can be traffic-engineered properly. While inside the DC these flows are treated as if they are created equal, on the WAN we need a way to discern them so that we can engineer them properly.

This brings us to the logical model of how information is exchanged between the two controllers. The dotted lines represent the channel and the direction of information exchange between the DC controller and the IP/MPLS controller (by the way, I keep using backbone controller and IP/MPLS controller interchangeably; they mean the same thing) through defined APIs. So at the end of the day, what do these APIs need to do? They need to cover two aspects.
First, we are looking to compute a path to provide connectivity across data centers through the backbone, and to compute a path you need the topology information. That is one. The second thing is that we need a way for controllers to request the creation of the path, so that the paths can be set up in the backbone with the appropriate resource reservations that are needed. This request can follow a typical client-server model.

All right, so what is topology? Today, data center topologies are too complex for TE controllers. What I mean by complexity is not the full mesh of connectivity that is available in the data center; the thing is, the traffic engineering controller does not need that level of detail to compute the path. So we can simplify by creating an abstraction of the topology of the network. There are many ways to create this abstraction, and what we need is to create the abstraction and then specify it in a predefined, standardized model. One example we have here: take a real DC where we have a couple of gateway routers and a bunch of spines and leaves.
These can be abstracted to, for example, two gateway routers, two spine nodes, and a bunch of leaf nodes with abstracted links across them. This abstraction can now be specified by the DC controller in a well-defined YANG model.

The IP/MPLS controller, TE controller, or backbone controller, in the end, is just an optimization engine. It's a path computation engine which collects topology and computes paths at the request of the clients. If you follow the picture below: the DC controller is exporting the abstract topology to the IP/MPLS controller, and the DC controller is requesting a path, basically saying "compute a path from A to Z with certain attributes", for example X amount of bandwidth, or avoiding certain links. The IP/MPLS controller, now that it has knowledge of the topology and the reachability from A to Z, can compute the path and say: okay, this is the path that you should take, or this is the path that is provisioned to carry your traffic.

MPLS provides a toolbox that can be used. MPLS has been deployed widely in networks today; we have RSVP, LDP, and BGP-LU for label distribution. Being a toolbox, MPLS also allows what we call combinability: you are not constrained by one particular label distribution protocol, you can combine a bunch of them, for example LDP over RSVP, or use what's called a binding SID in SPRING terminology to bind disparate LSPs together. So MPLS allows you that capability, and the beauty of this is that, with the scheme we have with the API, the WAN and DC controllers working in conjunction can hide all that complexity from you. They can completely hide it.
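The two roles just described — the DC controller exporting an abstracted topology, and the backbone controller acting as a path computation engine over it — can be sketched roughly as follows. This is only an illustration: the node names, link attributes, and JSON shape are invented, standing in for the well-defined YANG-modeled document the talk refers to, and the function is a toy CSPF-style search, not any actual controller API.

```python
import heapq
import json

# Hypothetical abstracted topology, as a DC/WAN controller pair might exchange it
# (plain JSON here, standing in for a YANG-modeled document).
ABSTRACT_TOPOLOGY = {
    "nodes": ["dc-a-gw", "core-1", "core-2", "core-3", "dc-z-gw"],
    "links": [
        {"a": "dc-a-gw", "z": "core-1", "te-metric": 10, "avail-bw-mbps": 4000},
        {"a": "dc-a-gw", "z": "core-2", "te-metric": 10, "avail-bw-mbps": 9000},
        {"a": "core-1", "z": "dc-z-gw", "te-metric": 10, "avail-bw-mbps": 4000},
        {"a": "core-2", "z": "core-3", "te-metric": 5, "avail-bw-mbps": 9000},
        {"a": "core-3", "z": "dc-z-gw", "te-metric": 5, "avail-bw-mbps": 9000},
    ],
}

def compute_path(topology, src, dst, min_bw_mbps):
    """CSPF-style sketch: Dijkstra over only those links with enough bandwidth."""
    adj = {}
    for link in topology["links"]:
        adj.setdefault(link["a"], []).append(
            (link["z"], link["te-metric"], link["avail-bw-mbps"]))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return {"cost": cost, "path": path}
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric, bw in adj.get(node, []):
            if bw >= min_bw_mbps and nbr not in seen:
                heapq.heappush(queue, (cost + metric, nbr, path + [nbr]))
    return None  # no path satisfies the constraint

# A 5 Gbps request cannot use the 4 Gbps links via core-1, so the engine
# places it on the longer but higher-capacity route through core-2/core-3.
print(json.dumps(compute_path(ABSTRACT_TOPOLOGY, "dc-a-gw", "dc-z-gw", 5000)))
```

The point of the sketch is the division of labor from the slide: the client only hands over an abstraction plus a constrained request, and the optimization engine answers with a concrete path.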
You don't have to worry about it. And now I'll hand off to Dave.

So we are looking at connecting the front end and the back end, where the front end can request certain specific requirements from the back-end network. Why is this relevant to OpenStack? Because we want to do it in an open fashion. The Neutron and backbone MPLS controllers must be able to exchange the information via a standardized interface rather than a proprietary one. Essentially, the goal is to build an API which is vendor-neutral and technology-neutral, not tied to a given specific vendor, so that anybody and everybody can use it. It benefits the operators, it benefits the WAN vendors, and everybody is a winner in this case. And the information exchanged between the DC controller and the WAN controller should be described in a standardized manner through this API.

One thing going for us is that the foundational work is already laid out by BGP VPN: there exists a service plugin in Neutron which does allow consuming resources the BGP VPN way. The next step is: how do we bring in the traffic engineering information on top of that? That's where OpenStack's relevance comes into play. So how do we do it?
We've looked at a few ways. We've considered working with the BGP VPN team to see if we can add an extension to the existing API to be able to carry this additional information, which is needed for traffic engineering. In this slide, we are proposing possibly bringing in a new service plugin alongside the BGP VPN plugin, whereby this service plugin will talk the traffic engineering API directly with the WAN controller. What this does is it doesn't tie the approach to one specific technology; it keeps it technology-agnostic. If somebody wants to do it in a different fashion and not be tied to BGP VPN, it gives that flexibility. But we don't have a rigid position at this point; that's one of the things where we are coming back to the community to come and work with us. We will talk about more use cases, and if you have any use case where you would like to see this API act differently, we want your feedback. We will be kicking off this activity in Neutron, where we will initiate this program, and obviously we want participation from a wider audience.

Here is an example of how this API could look. In this example, the data center application is requesting a certain guaranteed bandwidth from the back end, with minimum delay. It's as simple as that: basically five parameters are needed. You specify the LSP name, the source address, the destination address, the guaranteed (required) bandwidth, and of course the lowest delay. Now, like Yuri mentioned earlier in the presentation, you may want to pick the path based on geographical preference: certain traffic should not go out of the country, or whatnot.
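The five-parameter request just described might look something like the following sketch. The field names and the helper function are hypothetical illustrations of the idea, not a ratified schema; the addresses come from the documentation ranges.

```python
import json

def build_te_path_request(lsp_name, src, dst, bandwidth_mbps, minimize="delay",
                          extra_constraints=None):
    """Assemble the five-parameter request described in the talk. All JSON
    field names here are invented for illustration, not an agreed schema."""
    request = {
        "lsp-name": lsp_name,
        "source": src,
        "destination": dst,
        "bandwidth-mbps": bandwidth_mbps,
        "minimize": minimize,  # e.g. request the lowest-delay path
    }
    # The talk suggests future optional parameters, e.g. geographic restrictions.
    if extra_constraints:
        request["constraints"] = extra_constraints
    return request

req = build_te_path_request("dc-a-to-dc-z", "192.0.2.1", "198.51.100.1", 500,
                            extra_constraints={"avoid-countries": ["XX"]})
print(json.dumps(req, indent=2))
```

The optional `constraints` field shows how the base five parameters could be extended for the geographic-preference case mentioned above without changing the core request.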
So we could add additional parameters, proximity-related or whatnot. This is something where we can really use everybody's help and community-wide participation, so that we can make it as generic as possible. Depending on your use cases, the operator's needs, and the vendor's implementation, it could be very easily extended. That would be our goal in building this API.

Okay, so now, if we imagine that we have this API, which use cases can we potentially realize with it? Imagine that distributed data centers and the backbone in between can now interact with the help of this traffic engineering API, and a data center can basically dynamically request traffic-engineered resources inside the backbone. The backbone is now able to facilitate these demands in real time or near real time, and provide the necessary resources on a traffic-engineered path between location A and location Z. These locations can be two data centers, but they can also be, say, one data center on the left and a broadband network gateway on the right, or some piece of mobile infrastructure. So it shouldn't necessarily be a data center interconnect use case. With that, a VNF is now able to dynamically address its demands, and these demands will be met by the backbone in between.

One of the potential use cases which can be realized is the geographical constraints use case, where we say some particular traffic should not leave the borders of a particular routing domain. It can be a country.
It can be some region, whatever. This is now possible with the help of such a traffic engineering API in between.

Another obvious potential use case is low-latency traffic. Some application, maybe a 5G application or something else which requires a low-latency path, is now able to dynamically request these resources inside the backbone, and the backbone can facilitate these resources and provide the low-latency path for this application. On the left side there can be some piece of mobile infrastructure, a mobile gateway, and on the right side the back end, a piece of mobile infrastructure running in the data center. Through the API, the controllers can now exchange this information, and the resources will be provided in the backbone in between. Today, two separate workflows would probably be needed for that, one inside the data center and one inside the backbone, and this would obviously take longer. With the help of a direct API in between, we can think about further use cases like bursting, data center interconnect, storage replication, and whatever problems or needs you might have with your applications, if you are building your applications out of backbones and data centers. So these are potential use cases. Sounds amazing.

Now let's come to the conclusion and have some final discussion. The traffic engineering API allows a direct interaction between data centers and the backbone in between. We think that the traffic engineering API can be realized as a Neutron service plugin, which will be vendor-neutral and will allow us to leverage Neutron itself, but also to use existing SDN controllers inside data centers as well as emerging controllers in the backbone. The way the controllers work in their specific domains will not change: Neutron will work as it works today, an SDN plugin into Neutron will work as it works today, and the SDN controller inside the backbone will work as it works today. So this does not change.
We add an additional capability and can now leverage both pieces of this infrastructure. The realization of this API helps service providers, or any company using backbones and data centers, to build the new use cases we previously described. Finally, this API helps VNFs, the applications inside data centers, to dynamically request resources which can then be met by the backbone in between. So if you feel like this is interesting for you, we have an etherpad which you can join to comment or add your use cases and your contact details, or you can just ping any of us; the email list is on the screen. We are looking for feedback, any comments, any ideas, any critique you might have, to see if this makes sense to the community.

Okay, so we are at the end of the presentation and open for your questions. Thank you.

[Question] This will obviously be helpful for data-center-to-data-center connectivity across a single service provider network, which is typically one hop on the internet. I'm wondering why not scale it to internet scale, dealing with both the traffic engineering inside one hop and with the SLAs across the internet. If I have a private cloud and my Neutron module can signal the first-hop provider about my request for a better SLA (delay, bandwidth, whatever), will that be signaled across? It's a different mindset, I know, than what you present, but what do you think about that?

[Answer] Yeah, definitely, this can be operated. You can have a model where you have a couple of SDN controllers talking to each other, in the path computation model that we have today, to try to set up paths and connect the two service providers to provide end-to-end connectivity.
It is definitely possible, but at the end of the day it's about the topologies and how the services are set up. The domain of connecting two clouds is not particularly relevant to this particular API, other than how the peering exchanges happen between those two service providers. That is not what this API is addressing, but yes, you can extend it; that is a separate handshake between the two SDN controllers managing those service providers. Does that answer your question?

[Question] The cloud environment on one data center and the other has the Neutron module. So your talk is about Neutron to Neutron, meaning traffic engineering over MPLS, right? [Answer] It's a very specific use case of MPLS, yes. [Question] One hop, then: one data center of one cloud provider and another? [Answer] Yes, right. Not across the internet, though it can be extended to any OpenStack-to-any-OpenStack. Currently, the way we are proposing the API, you're correct, it is for this first hop. Connecting between two providers is not something we are proposing here, but that's definitely a good use case, and it would require defining how this information gets exchanged between the two backbone controllers, the two WAN controllers, controlled by two different service providers. We're not thinking about that in this particular case. What we are trying to do is build foundational work, so that this eventually gets into those more complex use cases. That's the goal.

[Question] I have another question about the controllers. Do you want this to become an IETF standard, so that the controller developers build it into their controllers, or do you want it to be based on ODL, or what? [Answer] No. The API which we are proposing here is the API between Neutron and the SDN controller, and that SDN controller can be any: it could be ODL,
it could be Contrail, it could be any other, and that's why we want to build it in an open fashion, and standards-based, so that it works for everybody. And that's why we need the community's help, to come and talk to us, because we are going to be pushing a blueprint to get this initiative going. But in order to ratify it, to get it right, and to make sure it works for everybody, we need that input. I think the idea is also to use an existing IETF model, or extend a model, so that it can be applied broadly. That's the proposal. There was a question here.

[Question] A personal question, because I've been playing a lot with this myself. How are you thinking about dealing with the case where you ask for a set of capabilities and some of them can be satisfied and some cannot? For example, I set up a path with a primary and a secondary, but I also say that there must be a maximum of five milliseconds of delay and there must be no single point of failure. In many topologies you can't get them all. [Answer] Yes, so we thought about that; in fact, we did have a few discussions about it. One model was a NACK: whatever you're asking for, if the back end cannot support it, it's a NACK; if it can support it, it's an ACK. Another model we're really thinking about is that you make the request and the response comes back; for example, in the earlier slides, you say "give me a path between A and Z" and the result comes back as A, X, Y, Z. You could possibly want to influence that, or you want to say A, B, X, Y, Z, or something like that. So those are more detailed questions; we are very cognizant of them, and that is something we want to take up once we get into the details and the implementation, on IRC and in the meetings. What we will do, once we get this thing going, is kick off weekly discussions, kick
off a meeting where we would want people to participate, to come in and bring up pretty much these use cases you're talking about. But the very simple approach, just to get going, is ACK/NACK: okay, this is my requirement; if the back end says yes, I can meet that, it's an ACK; otherwise it's a NACK.

[Question] Are you looking at something like: okay, if this is not available, then do this, and then do that? For example, if the bandwidth is not available, then what's the next best that I should try, to satisfy those kinds of requirements? [Answer] Yes. The API can provide those options. Delay, for example, is one such characteristic: you can specify a delay bound, and then if the computed path is not available, we can come back and say, okay, that's not available, what would you do? Those kinds of options are what we were describing, and it's something we can build on.

[Question] Hi, thanks. I really think this idea matters, but I have not so much a question as more of a comment. Can you go back to your slides, maybe? Yes, this slide is okay. So, considering what you said about the design, you still want all domains, I mean the data center domain and the WAN domain, to be completely independent. I like this idea if the TE API sits between Neutron and the data center SDN controller. But between the WAN SDN controller and the data center SDN controller, I'm not sure the TE API is a good choice. Maybe BGP or BGP-LS would be much more scalable in this case, because that protocol is probably mature enough to provide such kind of information. [Answer] Talking about BGP-LS: with BGP-LS today we basically do topology. There's another element to this, but you can probably implement both.
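The ACK/NACK plus next-best-option negotiation discussed in this exchange could be sketched, very roughly, as a client-side loop. Everything here is invented for illustration: the callable standing in for the backbone, the field names, and the bandwidth-halving fallback strategy are assumptions, not anything the proposal has specified.

```python
def request_path(backbone, request):
    """Hypothetical client loop: try the request as-is; on each NACK, relax
    the bandwidth constraint and retry, reporting what was actually granted."""
    bw = request["bandwidth-mbps"]
    while bw > 0:
        attempt = dict(request, **{"bandwidth-mbps": bw})
        if backbone(attempt):                      # backbone answers ACK
            return {"status": "ack", "granted-bandwidth-mbps": bw}
        bw //= 2                                   # NACK: try the next-best bandwidth
    return {"status": "nack"}

# Stand-in backbone that can only grant up to 300 Mbps on this path.
fake_backbone = lambda req: req["bandwidth-mbps"] <= 300

print(request_path(fake_backbone, {"lsp-name": "demo", "bandwidth-mbps": 1000}))
```

Whether such fallback logic belongs in the client or in the backbone's response (for example, a NACK that carries the best achievable value) is exactly the open design question raised in the discussion.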
Yes, you can use both, I mean you can use BGP and BGP-LS. One other aspect, though, is requesting service as well: in addition to topology, you also want to be able to request a certain resource from your backbone, and that BGP-LS does not provide. That's the point: who is requesting what from whom? Is the data center requesting resources, or is the backbone telling the data center? You can use either one; select whatever works for your application. This is also an open question we discussed, so there is no single valid answer; we should see what makes sense. Both are potentially thinkable, depending on who is providing what to whom.

One of my suggestions would be the etherpad which we posted here. I just created it; there's nothing much there yet, it's an entry point, because we want your feedback. Go to that etherpad, it's very simple, just remember the last part of the URL, and put your thoughts there, whatever they are. Because once we publish a blueprint, we want to incorporate as many use cases as possible; we want to cover it as broadly as possible.

[Question] This is probably related to the question of the first gentleman, who asked about generic OpenStack-to-private-OpenStack data centers. My question is: assuming that today 80% of the world is not SDN WAN-controller based, and assuming that 80% of the world is basically, as the gentleman described, internet-based data centers, private data centers, do you have in mind any proposal, or can you elaborate on a proposal, where this idea can connect to standard routers, not SDN controllers but standard routers, and make the standard routers make the decision, from the WAN perspective, to create the traffic engineering tunnels, and at the same time to terminate that traffic into private data centers using OpenStack? Can you elaborate on that idea?
[Answer] So what we are saying is that this TE API is twofold: one part is where a controller can talk to a controller. But the way we are proposing the implementation, it's a service plugin. Once you have a service plugin, the back end can be anything: it can talk to the router, it can talk to yet another controller which talks to the WAN controller. [Question] The problem is that the router doesn't have an API that talks to the service plugin; that's what I'm referring to. [Answer] Yeah, so you would have to build some kind of a wrapper in there. I have not thought through that use case, but it nonetheless is a good one. [Question] It's 80% of the use cases; that's 80% of the world. I mean, the world is 80% non-SDN-controller based; it's based on routers. [Answer] Sure. That probably depends on your perspective. If you are thinking about use cases where you need to combine both of these assets, then you are in the target use case. If you are a user of a private data center, then you're right, this use case will probably not work, because you have no instance you can talk to which has an overview of all the assets between your data centers. So basically the premise would be to have a visibility method; for this proposal to be a valid point of entry, you need a way to oversee the network. That is the bottom line: we are computing paths, so we need topology.

Okay, any other questions? We have four minutes. [Question] I wonder about the user of the service. I request a service for what, on the Neutron side: network-based, VM-based? Do you provide that service for the entire network, or per network port? [Answer] That's a good question; now you're really hitting the right point.
So yeah, you have two ways. One is the way we were talking about just yesterday. What we are thinking is that you essentially have a network: say you have a blue network and a red network between two data centers, and you want to say my blue network is a low-priority network and my red network is a high-priority network. So you want the ability to change the characteristics of a network through this API. That's the way we are looking at it. You have a VM which is running these services; possibly the VM is connected to both networks, or to two different servers, it doesn't matter. But right now the thought process is: you're extending the network which spans the two data centers, and you want to be able to influence the characteristics, or the performance, of that network, so to speak. That's the way we're thinking.

Another way to look at this is that the API provides the underlay: you set up your path, and what you're asking about is how you steer traffic into that path. [Question] Okay, and that service mapping part is internal to Neutron? [Answer] Yeah, we just give you the underlay, and then you put the overlay on top.

Okay, thanks for the great questions. Have a nice day. Thank you.