Good morning. Good morning. So let's get started. Today's session is "A Novel Service Modeling Framework for NFV Networking." It will be presented by the three of us. I'm Bin Hu from AT&T. — I'm Sukhdev Kapoor, Distinguished Engineer from Juniper Networks; I work as part of the OpenContrail program. — And my name is Georg Kunz. I'm with Ericsson, and I'm very active in Gluon and in OPNFV. — Thank you, Sukhdev and Georg. So today we will talk about this service modeling framework. We'll start with the paradigm shift in NFV networking, the business goals, and what Gluon basically is — the very high-level architecture. That will be followed by the current implementation and background, the model-driven approach and how to create a new API, which will be introduced by Georg, and then the Gluon roadmap and the integration plan with Neutron.

So let's start with the paradigm shift. In traditional data center networking, the traffic is usually not a bottleneck, and the traffic is terminated at the application, at the SaaS side. Under these circumstances, interoperability and stability are always the keys to the success of SaaS applications in the traditional cloud: you want to achieve portability, so that different VNFs and SaaS applications can run on different types of clouds. In NFV networking, things are a little bit different. The traffic is usually high-throughput, and it usually passes through instead of terminating at the application — for example virtual switches, virtual routers, virtual firewalls. All the traffic passes through those VNFs and is switched or routed to different destinations, so we need high performance for the traffic passing through those VNFs. Under these circumstances, the more important thing is quick development and accelerated deployment of new networking services, to achieve time to market
and to improve visibility. So that's the paradigm shift in NFV networking. Also, from the service provider perspective, we are dealing with many different types of legacy networks. Someone said that there are 99 protocols to deal with, instead of only the TCP that has to be dealt with in data center networking. This means that we need more structured, multiple SDN controllers to be able to manage the different backends and the different legacy demands. So supporting multiple SDN controllers simultaneously on the backend is very important from the NFV networking perspective. And actually it's more than NFV networking at this stage. For example, at ONS about a month ago, Google had a keynote speech about Google's data center networking: they even have a hierarchical SDN controller structure — a global SDN controller plus local SDN controllers — in Google's Espresso global network. That was presented in Google's keynote at ONS. All this shows the paradigm shift, and we need different requirements, different use cases, to be supported in NFV networking. For example, we need to support multiple networking backends simultaneously.
We also need to support quick development and accelerated deployment of new network service APIs, and we need to make these new APIs agnostic of the different backends. For example, the backend could be OpenDaylight, could be ONOS, could be Contrail, or could be any other local SDN controller controlling those different switches. So, all in all, we need networking services on demand, which basically means that we need to support the unknown unknowns of the near future — the new innovations that are happening in the NFV networking space.

The business goals here include: the solution needs to be integrated with OpenStack, because OpenStack has a lot of different deployments and commercial adoption in the market. We need to be able to use both Neutron and the new NFV networking services together, so that we can support both the existing users of Neutron in the traditional data center and the new users in the new segment of NFV networking.

The solution is called Gluon. Gluon is a model-driven, extensible framework for NFV networking services. Here is the very high-level overview of the Gluon architecture. From the architecture you can see there are basically two major components — we show four components here, but two major categories. One is the Gluon framework.
Gluon basically is a port arbiter that maintains the list of mappings of ports to the different network backends, and forwards port-related operation requests to the correct backend, i.e., SDN controller. The second category of components we call Proton. It includes the Proton server, which is basically a service API server that hosts multiple APIs simultaneously, which means it can support multiple NFV networking services. On top of the Proton server there is a set of protons; each proton is a set of APIs for a particular NFV networking service — basically a standard northbound interface — for example for L3VPN, for point-to-point networks, for service function chaining, or for any other service that is modeled and supported by those APIs. There is also a component on the southbound side called the shim layer. The shim layer is basically an adapter between the northbound interface of Proton and the actual backend SDN controllers — for example, a shim layer for OpenDaylight, a shim layer for ONOS, a shim layer for Contrail, and a shim for any other SDN controllers as well. So that's the basic architecture of how Gluon is designed.

Currently Gluon is implemented with Neutron. From the implementation perspective, the Gluon framework is implemented as the core plugin — an extended core plugin — of Neutron. It basically extends, or subclasses, Neutron's core plugin with the logic to differentiate a port supported by traditional Neutron from a port supported by the protons. Based on whether the port is from a proton or from Neutron, it routes the port request either to the Proton server, to be handled by the backend SDN controllers, or it just gives it to the ML2 plugin, and the L2 agent will work with the other Neutron backends — for example, in the case of Contrail, the Contrail mechanism driver works with Contrail as the controller.
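The port-arbiter idea described above — route each port operation either to the Proton server or down the regular Neutron/ML2 path, based on who owns the port — can be sketched roughly like this. This is an illustrative model only; class and method names are invented here and do not come from Gluon's actual code.

```python
# Hypothetical sketch of Gluon's port-arbiter role as described in the talk:
# a mapping of port IDs to backends, and a dispatch step that forwards each
# port operation to the owning backend. Names are illustrative, not Gluon's.

class PortArbiter:
    def __init__(self):
        # port_id -> backend name ("proton" for Gluon-managed ports, "ml2" otherwise)
        self._port_backends = {}

    def register_port(self, port_id, backend):
        """Record which backend owns a port when it is created."""
        self._port_backends[port_id] = backend

    def dispatch(self, port_id, operation):
        """Route a port operation to the owning backend; default to ML2."""
        backend = self._port_backends.get(port_id, "ml2")
        return f"{backend}:{operation}({port_id})"

arbiter = PortArbiter()
arbiter.register_port("port-a", "proton")  # created via a Proton API
proton_result = arbiter.dispatch("port-a", "bind")  # goes to the Proton server
ml2_result = arbiter.dispatch("port-b", "bind")     # unknown port: falls to ML2
```

The key design point from the talk is the default path: any port the arbiter does not know about is treated as a traditional Neutron port and handed to ML2.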
Under this architecture, the most important part is what we call the YAML file. The YAML itself is basically the model of the network service. For example, we may have a YAML for L3VPN, a YAML for SFC, a YAML for point-to-point, or a YAML for any other service. The Proton server uses a component called the Particle Generator, which reads those YAML files and creates the RESTful API endpoints and their routes, and also the schema in the database. So that's basically what sits here. And that's an example of the YAML file. The philosophy is: at design time, we design the model of those APIs in YAML, and at runtime the Particle Generator generates the RESTful API endpoints, the database schemas, the backend synchronization methods, et cetera, in order to make them work together. Georg will give a more detailed introduction of what the YAML looks like and how to generate those APIs.

Exactly. So now you know basically why we want to have a model-driven approach in Gluon. Just to recap: we want to have something that allows you to do design work very quickly and flexibly and come up with new networking APIs. It's really about this design-time thing: you only need to think about what kind of API you want to have and write it down, and then you don't need to take care of the runtime aspects like creating the REST API, the database schemas, and so on and so forth. That's the overarching reason why we want a model-driven approach. But if you look at the details, you start asking: okay, just defining a model probably doesn't cut it — what are the properties of the modeling approach, really? What should be the semantics of our modeling approach, and what are the basic building blocks
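The design-time/runtime split described above — a YAML model in, REST routes and database schema out — can be illustrated with a tiny generator sketch. The model structure below is a guess at the general idea, not Gluon's actual YAML schema, and the function is not the real Particle Generator.

```python
# Illustrative sketch of what a model-driven generator does at runtime:
# read a YAML-style service model and derive the REST endpoint paths and
# per-object database columns. All structure here is hypothetical.

model = {
    "api": {"name": "l3vpn"},
    "objects": {
        "VpnService": {"attributes": {"id": "uuid", "route_targets": "list"}},
        "VpnPort":    {"attributes": {"id": "uuid", "mac_address": "string"}},
    },
}

def generate(model):
    api = model["api"]["name"]
    endpoints, schema = [], {}
    for obj, spec in model["objects"].items():
        # one REST collection per API object, under a static /proton prefix
        endpoints.append(f"/proton/{api}/{obj.lower()}s")
        # one table per API object; its columns are the object's attributes
        schema[obj] = sorted(spec["attributes"])
    return endpoints, schema

endpoints, schema = generate(model)
```

The point is that nothing service-specific is hand-written: change the model dict and the routes and schema change with it, which is the "design time vs. runtime" separation the talk emphasizes.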
you can use to build new APIs? Here I'd like to briefly cover two of the design goals that we have. The first one: we want to build a model-driven framework that provides the flexibility we need to express whatever networking APIs might come up in the future. We don't know them yet, so it's really important that the modeling framework itself is flexible enough, and that means in turn that we need to really minimize the implicit and explicit assumptions and dependencies that exist between API objects. We need maximum flexibility here, basically. And then we need a couple of tools, like a set of basic data types, and we need to define the semantics of how to compose those objects, so that in the end we eventually have an ideally hierarchical object model which you can use to create new APIs, leveraging the best practices of object-oriented design, of course. These are the design goals covering the API modeling approach as such. But we also need to take care of how we make sure that multiple different APIs can coexist next to each other at runtime, in the same data center, in the same network, and even bound to the same VM, for instance. I'll talk about those on the next couple of slides.

Specifically about the API object model: there are two different kinds of objects, base objects and API objects. A base object is really just a set of attributes — something that groups a set of attributes, so to say — and it is meant to be used in composing and building more complex objects. Our Particle Generator will not instantiate a base object; it will not be something that you can modify later on in the API. In that sense it's really more like an abstract class.
It's something you build something else from. On the other hand, the API objects are the real objects which the Particle Generator creates. We will have URL endpoints for each of those API objects, and each API object is represented as a table in our database. So these are the final, concrete objects, basically. In addition, we have an inheritance scheme, so one base object can extend another base object, and API objects can also extend base objects. Nothing entirely new, I guess, but these are the basic building blocks we want to have in order to come up with an API.

Then, as I mentioned before, for every API object there will be concrete instantiations at runtime — or let's say we will create and expose a REST API for every API object — and that looks like this. When you look at the URL, we have a static part, which is just called "proton," then the name of the API or the model — in our case here it's just "example API," but it could be something like layer 3 VPN or service function chaining — and then you have the separate API objects. The system automatically creates five common, very basic operations that you can apply to these API endpoints immediately after bringing up the system — well, after the system has read the model. They are: creation, modification, listing all objects, showing specific objects, and deleting specific objects — all without having to write a single line of code, except for the model, of course. The body of the requests you send to these APIs is just JSON, nothing particular here. One thing to mention:
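The five auto-generated operations per API object can be written out concretely. The URL layout (static `/proton` prefix, then API name, then object) follows the talk; the exact paths and HTTP verbs in real Gluon may differ, so treat this as a sketch of the pattern rather than the actual interface.

```python
# Sketch of the five basic operations the talk says are generated for every
# API object: create, modify, list all, show one, delete. Paths are
# illustrative, modeled on the /proton/<api>/<object> layout described.

def crud_routes(api, obj):
    base = f"/proton/{api}/{obj}s"
    return [
        ("POST",   base),            # creation
        ("PUT",    base + "/{id}"),  # modification
        ("GET",    base),            # list all objects
        ("GET",    base + "/{id}"),  # show a specific object
        ("DELETE", base + "/{id}"),  # delete a specific object
    ]

routes = crud_routes("exampleapi", "exampleobject")
```

Since these routes are derived purely from the API and object names, a new YAML model gets a full working REST surface with no hand-written endpoint code, which is the point being made here.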
I don't know if it has been mentioned before, but part of the Particle Generator is also an API client. So once you have a model, you can use an API client — a CLI client — to immediately start sending requests towards this automatically created API. This was just a basic overview of what the API building blocks look like: base objects and API objects.

The next question is: how can we make sure, given a certain API modeling framework or best practices, that other users of this system come up with APIs that can actually coexist? And how can we make sure that different networking services can be bound to the same VM, for instance? In order to facilitate that, and to give a guideline to users who want to create new APIs, we have defined four base objects. There is a base port, which represents basically a vNIC of a VM, so it has properties such as MAC address, MTU size, admin state, things like that. We have base interfaces, which are linked to base ports; those are layer 2 segmentation devices that allow you to segment traffic, for instance based on VLAN IDs or VXLAN VNIs. Then we have, on the right-hand side, the base service. That should be the basic entity that the creator of a new API thinks about: I want to create a new service, and all the properties of a particular service should somehow be modeled as part of such a service object. And then comes a fundamental thing here.
We define a binding object, which is meant to bind a service to an interface. This allows us at runtime — by creating or deleting this binding — to bind a certain service, layer 2, layer 3, whatever, to a given interface. That, for instance, allows things like keeping the VM up and running while exchanging just the networking services bound to it, by removing an existing service binding and creating a new one towards another service. So this is the basic idea that we give as a best practice.

Talking about practice: we do have an existing layer 3 VPN model right now that we can use to create and configure L3VPNs, and it maps to the basic model I just presented like this. We have a concrete port which just extends the base port, and a concrete interface which extends the base interface — nothing fancy here, really. But then you can see on the right-hand side there is a VPN service, which maps to a VPN instance, so to say. The properties or attributes of this object are things like IPv4 route targets, IPv6 route targets, route distinguishers — very service- or VPN-specific properties. And then in between there is again this binding, and the binding attributes are things like the IP address, subnet, and gateway, because those are specific not to a service, really, but to the interface: when you bind a particular interface to one service, you need to have an IP, for instance, in that particular case. So this should give a rough overview of how we map our abstract model to a concrete model that we have running right now.

At the same time, we are also working on further models, going to both extremes in terms of complexity. We have a very simple point-to-point model. It's rather a toy model, but its purpose is to show that our modeling API is capable of coming up with
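The base-object/API-object inheritance and the binding idea just described can be sketched as plain classes. Everything below is illustrative: the class and field names are guesses at the shape of the model from the talk (base port as vNIC, base interface for L2 segmentation, VPN service with route targets, binding carrying the interface-specific IP attributes), not Gluon's actual definitions.

```python
# Rough sketch of the described object model: base objects group attributes
# and are extended; a binding ties a service to an interface and carries the
# attributes that belong to the pair (IP, subnet, gateway), not the service.

from dataclasses import dataclass, field

@dataclass
class BasePort:                # "abstract": represents a vNIC of a VM
    mac_address: str = ""
    mtu: int = 1500
    admin_state_up: bool = True

@dataclass
class BaseInterface:           # L2 segmentation on a port (VLAN ID / VXLAN VNI)
    port: BasePort = None
    segmentation_id: int = 0

@dataclass
class VpnService:              # concrete service object for the L3VPN example
    ipv4_route_targets: list = field(default_factory=list)
    route_distinguishers: list = field(default_factory=list)

@dataclass
class VpnBinding:              # binds a service to an interface at runtime
    interface: BaseInterface = None
    service: VpnService = None
    ip_address: str = ""       # interface-specific, hence it lives on the binding
    subnet: str = ""
    gateway: str = ""

port = BasePort(mac_address="fa:16:3e:00:00:01")
iface = BaseInterface(port=port, segmentation_id=100)
svc = VpnService(ipv4_route_targets=["64512:1"])
binding = VpnBinding(interface=iface, service=svc, ip_address="10.0.0.5")
```

Swapping the service a VM uses then amounts to deleting one binding object and creating another against the same interface, which is exactly the runtime flexibility the talk highlights.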
a new service — a new service API that does not exist in Neutron right now, for instance. That's a little bit of a problem with the layer 3 VPN API, because that already exists in Neutron; something like this does not. It's really just a simple service that creates a pipe between two ports: whatever comes in on one side comes out on the other side. So we again have a port and an interface — nothing special here — but then, because it's just a pipe, we don't need any addressing or things like that. The service basically just defines a protocol, which can be something that describes the encoding of the data in this pipe, and then the point-to-point binding really just specifies — in this example — the bandwidth. So by using this very simple API, you can create pipes going from one VM to another with a specific bandwidth. That's just to show what another potential API might look like.

In the opposite direction in terms of complexity, we are also looking into mapping an existing service function chaining model onto our Gluon modeling language. Right now this IETF service function chaining model is defined in YANG, and we have created a very simplified Gluon-based version of it. That's very important for us, to validate that the modeling approach we have chosen and the language properties we have chosen are — not complex — rich enough to model, or to cover, real-world complex models. The model right now is very simplified; we're still learning, and we need to see how that evolves over time, but we're making progress on that front. That basically covers my part, and now Sukhdev will talk about the future.
Yes. So, whatever you heard — what we want to do is bring it in as an integral part of Neutron. We want to take Gluon and make it a part of Neutron, as one of the Stadium projects. Therefore one of the goals is to bring all these APIs which Georg just mentioned in as an extension to Neutron. In order to do that, one of the thoughts we have is to make Gluon a new service plugin within the Neutron community, and thereby expose all these APIs we just talked about through the main Neutron API. Essentially it will look like "neutron gluon-create" an object, or "neutron gluon-connect" or bind two endpoints, and whatnot.

With that in mind, another thing which Bin mentioned earlier: the idea is to take all the existing services which are available in Neutron networking and be able to extend the services on top of that. For instance, if you have certain services deployed already, and now you want to bring in a new NFV function through a YAML file, what you will do is just create a YAML file, and that gets fed into the new proton, which then gets exposed as the API — and now you can start to use the new API through this extension. That's our ultimate goal; that's where we are heading. By doing the service plugin, we can fully integrate with the rest of the Neutron services, and we intend to leverage all of the existing networking projects which exist today — like networking-odl, networking-sfc, the VPN service, the load balancer, whatnot — so this will essentially add on to that.

One other project — I don't know whether you have heard of it or not, but I'm part of the OpenContrail program, and we announced networking-opencontrail yesterday.
That's a new project which is being kicked off, whereby OpenContrail now independently becomes a part of Neutron, so Gluon is an excellent opportunity to leverage that. All of these seamlessly fit together to bring in one unified API. For instance, when Georg was talking about the base objects: Gluon will have base objects, and the protons will come in and add new attributes, and all of a sudden we can create different models on top of the base objects. That's where we are going, and we intend to utilize all of our existing services and bring in add-ons. What does that do? It's a huge win for the operators: you can bring new NFV services into your existing deployments without requiring any forklifts. And you can run multiple SDN controllers — today, for instance, in Neutron, or in OpenStack itself, you couldn't run two monolithic plugins simultaneously. This would allow that, so now we can bring all of them together.

The new architecture would look something like this. You have the Neutron core plugin — we're looking at it: either we will use the core plugin as is, or we will possibly bring an extension to it and call it the Gluon core plugin. Then you have all the other Neutron extension APIs — load balancer, service function chaining, routing, or whatever; all of those APIs exist — and on top of that these new Gluon APIs get attached. Within the Gluon service plugin sits a Proton server — that's exactly what Bin and Georg talked about in detail.
So they get packaged as a part of this, and the YAML files are the ones which get fed into the Proton server and become part of the APIs. Then, on the backend, you have multiple SDN controllers, so now you can run OpenContrail or ODL or ONOS or whatever you have. For instance, if you have a data center — or multiple data centers, for that matter — being managed by this OpenStack instance, and they are managing different clusters, different parts of the data center, offering different services, you can bring them together.

To give you an example of what I was able to do by utilizing these principles: I used two OpenContrail instances independently, both hosting services, and I was able to create a service chain across the two controllers and load-balance it — something that doesn't exist today. If you need to accomplish that today, you have to do a lot of manual configuration; you'll be pulling a lot of teeth to make it work. So the goal of this program is to make it completely seamless, through one common extension to the Neutron API. That's where we are heading with this.

So this is food for thought; this is the way we are thinking about it. Some of the use cases are known, and some we don't know, because the industry is evolving so fast and so many new use cases keep coming up. We don't want to keep creating new extensions and keep bringing up more and more service plugins. So the intent here is: in addition to the existing resources which Neutron has — the basic resources such as ports, networks, and subnets — we will define two Gluon resources: the Gluon endpoint and the Gluon net-function.
The endpoint is essentially a single point with which you can associate any property. This is what Georg was talking about earlier: you have a base object, and then you can add additional attributes to it. So for instance, if you have an existing Neutron network which is running, and you want to add an additional property — be it a gateway service — you can define a Gluon endpoint to represent a gateway object, and you come in and say "Gluon endpoint, connect with the Neutron network," and that's it. Now what you are able to achieve is: you have an existing network, and all the ports running on that network can now bind to this external gateway. That would be one of the examples.

The net-function, on the other hand, is a little more complex — a service endpoint — where it actually defines the service. Essentially, you can connect a Gluon endpoint to a Gluon endpoint to create a point-to-point network. That's something which doesn't really exist today in Neutron: if you wanted not to use a Neutron network, but to create a port, create another port, and connect them — with this we can facilitate that. And similarly with the net-function: you can connect a Gluon endpoint to an endpoint, or you can connect a Gluon endpoint to existing networks or subnets. That way you're extending from one endpoint to one port, or to multiple ports, or to subnets, or whatever. So you can seamlessly keep extending your services. That's the thought process; this is what we're chewing on. And because this is a community-based program,
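The proposed endpoint and connect semantics above are explicitly "food for thought" in the talk, so the sketch below is doubly hypothetical: a minimal object with arbitrary properties and a connect operation that can target another endpoint or an existing Neutron network. No part of this is an actual Gluon or Neutron API.

```python
# Hypothetical sketch of the proposed Gluon endpoint resource: a single
# attachment point carrying arbitrary properties, with a "connect" operation
# wiring it to another endpoint, a network, or a subnet. All invented names.

class GluonEndpoint:
    def __init__(self, name, **properties):
        self.name = name
        self.properties = properties   # any property can be associated, per the talk
        self.connections = []

    def connect(self, target):
        """Connect this endpoint to another endpoint or an existing network/subnet."""
        self.connections.append(target)
        return self

gw = GluonEndpoint("gateway-ep", role="gateway")
ep = GluonEndpoint("vm-ep", local=True)
ep.connect(gw)                    # endpoint-to-endpoint, e.g. a point-to-point link
ep.connect("neutron-net-1234")    # or attach to an existing Neutron network
```

The design idea this mirrors is that one primitive ("connect") covers endpoint-to-endpoint, endpoint-to-network, and endpoint-to-subnet cases, so new services extend existing deployments instead of requiring new extension APIs each time.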
This is not something locked in. We would like you to think of use cases which you have, come in, participate, become a part of the team, become a core contributor, and sort of dictate the future of this program — help us identify more use cases so that we can address them and bring them in as additional protons, and extend the process from there. That's, in general, where we are taking this. And with that, I believe I'm done with what I wanted to communicate, so now we're going to open up for questions. We have roughly seven minutes to answer questions. Please use the microphones.

Good morning, gentlemen. Scott Fulton, for The New Stack and Data Center Knowledge. One use case that came to mind as you were talking about gluing together endpoints to endpoints to endpoints: it occurred to me that some of the larger data center providers in this country — organizations that rent space, effectively: Digital Realty, Equinix — are in the business of providing connectivity as one of their key services, and recently they've been providing extended connectivity to public clouds. Could something like what you're proposing with Gluon give them a new value-add which would enable an easier type of multiple connectivity,
not just to individual public clouds like Amazon, but to other affiliate data centers — and, as I say that, regardless of what network architecture they're using? I know Gluon may have been intended to be built on top of OpenStack, but when you say that Gluon can connect endpoints to other Gluon endpoints, Neutron may not be involved here at all. Perhaps this is an opportunity for creating virtual links — like Lincoln Logs and Tinkertoys — that could eventually connect anything to anything. Am I on the right track?

Well, one thing I didn't mention when I was explaining: these endpoints have properties whereby you can declare a local endpoint and a remote endpoint. When you specify a remote endpoint, you specify the properties of how to reach that endpoint. So one of the use cases I'm thinking of is, for instance, DCI — the interconnect between two data centers. You could create a Gluon endpoint which is local — connected to networks, or a gateway, or a router, or whatever object is within Neutron — and now you can define another object which represents something not managed by the local OpenStack instance, but which lives somewhere outside: either in another data center managed by another OpenStack instance, or somewhere else entirely. As long as you have the properties by which that second data center can be reached, you define that as a remote endpoint, and the next thing you do is say "bind these two endpoints" — and that's how you can achieve it. And you can create multiple of these.

Yeah, and I want to add one important point to this gentleman's question: Gluon is designed to be independent of the underlying infrastructure. The question was, forgetting about OpenStack, whether or not Gluon can support
point-to-point endpoint connections between different data centers, regardless of the underlying network architecture. My answer is simply: yes. It fits perfectly, no matter what the underlying network controller is; the design itself is independent. Here, logically, each of those SDN controllers behind the endpoints could be the controller of one data center. That controller manages the internal networking within its data center, while the data plane connects all those data centers together under the Gluon architecture. Thank you — thank you for the very good question. Any other questions? I guess we're done. Okay, thank you. Yeah, thank you. Thank you for your time.