on it. So I think I'm going to send a note to everybody that we are meeting now. I don't think many people will join, but that's okay. On the agenda front, I was chatting with Nikolai and also with Prem and Fred, and at least one topic seems to need immediate attention, especially the eNSM. The rationale is that we're going to have panels and talks at ONS, so the question will certainly come up: for example, how does NSM work with ONAP? At a high level we can say it's all complementary, but a precise answer will always land better: basically, "here is a document laying out our thought process on how these work together." Prem is also driving the ODL demo, which will help a lot in putting all the pieces together. Nikolai pointed out that there is actually a nice document already in progress on inter-domain NSM; you just have to tie several pieces together, and I thought that could be a good topic for today. If there is anything else, we should look at that too.

Yeah, I agree. I was trying to find the exact spec where we found it.

Not a related thing, but Frederick and others: I have shared it with permission. Should I share it at the Google group level? What are your thoughts, Frederick? The ability to manage the sharing, make changes.

Let's hold off on that just for the moment.

Okay. At least I've given it to a bunch of folks, for example Ed, Jeff, Ramki, and I can probably add Nikolai to this list.

Thank you.

My main concern is not that I don't trust anyone in the community; I think anyone in the community at this point will be okay. But there are two things: one, as the community grows, the chance of running across someone malicious gets higher; and two, I also worry about bots trying to spam our calendar.

Agree, agree.
So for now I'll just limit it to a few of us. So Nikolai, I'll add your VMware ID.

Yeah, please. Let me find it... I have put my name and email in today's meeting minutes.

I'll take it from there. Okay, so one question, Ramki: should we park this or take it up later, since it would be beneficial to all our discussion?

No, Prem, let's at least kick it off. I'm not sure we'll be able to conclude, but let's at least get the thought process going and say, here is a draft spec for review. All the key folks are here; I know Ed is missing, but let's get it moving. I talked to Nikolai just twelve hours ago and he pointed me to a nice spec in progress. I put it in the chat for everybody, so I thought we could make some progress. I'm not saying we have to conclude here, but at least start.

Okay. Does everyone here have some idea of what's written in this spec, or do we want to go through it?

I think we can cover it all. Yeah, go ahead.

Makes sense. Nikolai, you are one of the key contributors; if you can, please give us an overview of where we are. You have the link in the chat window; share that and we can take it from there.

Let me see if I can share this.

Yeah, it's right there in the chat; you can double-click and share.

I guess you see my screen already.

Yes.

Okay, so this was started maybe two months ago, I don't remember exactly, but it's fairly old. People were very active in it in the beginning, but then it went a bit stale. I do agree with Ramki that we need to kick something off, and this use-case call sounds like a good place to start discussing it.
We don't have any specific procedure for how we process these specs; we have quite a bunch of them in our issues already and aren't really closing them out. But I think this one is really interesting and important. For starters, it actually discusses the inter-domain network service mesh, so it's not really about eNSM, but it lays the groundwork: if there's agreement on this, and even an implementation, we'll have somewhere to stand when we move forward to eNSM. So I think it's important to understand and finalize this first.

It starts with a quick reminder of how things look today. In this picture, the current registry holds network services, network service endpoints, and network service managers. This is our central information storage for a single cluster. Now, when you go multi-cluster, the proposal introduced here is to use some form of service-name resolution. It isn't really specified as DNS, but for the purposes of having a concrete example, let's say DNS. We had some discussions going on here on the side, which were already more or less resolved. The overall idea is that the client asks for a network service; the document gives a good example with a name of the form secureinternetconnectivity.example.com. The idea is to resolve services across clusters this way, mapping a service to a domain name, and I think it proposes using SRV records to discover the services for that domain.
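The cross-cluster resolution idea just described can be sketched in a few lines. This is a toy model only, assuming a split of the federated name into service and domain parts; the `_nsm._tcp` SRV label, the stubbed record table, and all function names are illustrative assumptions, not anything fixed by the spec under discussion.

```python
# Sketch of the cross-cluster name-resolution idea: a federated network
# service name such as "secure-intranet-connectivity.example.com" is split
# into a service part and a domain part, and an SRV-style query is built to
# discover the NSM managers serving that domain.

def split_service_name(name):
    """Split 'service.sub.domain.tld' into (service, domain)."""
    service, _, domain = name.partition(".")
    if not domain:
        raise ValueError("expected a dotted, domain-qualified service name")
    return service, domain

def srv_query_name(domain, proto="tcp"):
    """Build the SRV owner name for NSM discovery in a domain (assumed label)."""
    return f"_nsm._{proto}.{domain}"

# Stubbed SRV records: owner name -> list of (priority, weight, port, target).
FAKE_SRV = {
    "_nsm._tcp.example.com": [(10, 5, 5000, "nsmgr-1.example.com"),
                              (20, 5, 5000, "nsmgr-2.example.com")],
}

def resolve_nsm_managers(service_name):
    """Return SRV targets for the domain part of a federated service name."""
    _service, domain = split_service_name(service_name)
    records = FAKE_SRV.get(srv_query_name(domain), [])
    # Lowest priority first, as SRV semantics prescribe.
    return [target for *_ignored, target in sorted(records)]
```

In a real deployment the lookup would of course go to actual DNS (or whatever name-resolution scheme the spec settles on) rather than an in-memory table.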
so um I don't know what more I mean I I didn't read that recently but that's at least more or less the the idea here how I can zoom this but um I don't know a prem do you do you do do you have any I would say comments on your site in terms of I know that you are already implementing something along the lines of connecting to different worlds or at least two different network service domains whatever that form is but it's two kinds of separation yeah so let me probably give a quick update on what is happening um so our intent is essentially to link open delet with network service mesh yeah and as part of it there are two parts to it one is open delet invoking nsm services and the other one is nsm treating open delet to be ensm and invoking the services okay we are done with the first thing which means what you have done is in case of open delet there is a there is a model called json rpc which essentially invokes you to host any application outside open delet and invoke the rpcs that are present in that okay so what we have done is we have basically developed gn sorry g rpcs tub and and then we define these services as yang model the moment you feed any yang model automatically those rpcs will be visible with an open delet right okay so now what we have done is to start with we have taken the icmp responder and then we have looked at what is the rpc calls that are present in icmp responder and then we have defined it as nyang which means in open delet you will see this as rpc endpoints we invoked it and then a basic thing is working which means icmp responder in a way got a few things but we are we are just still away from having it to call it as end to end right now moving on to that of the other part which is essentially invoking open delet or treating open delet as ensm i had few questions so the first question is for example when we talk about ensm you can essentially treat or develop a grpc shim layer around any of it it can be a physical device it can be odial 
It can be anything, and then you invoke those services, because that proxy would register with the DNS or the Kubernetes registry, and then a client, an NSM, or an NSC can invoke those calls. But the problem I foresee is that there may not be any data-plane stitching, because they are in different worlds, different domains. So with eNSM I foresee only control-plane integration, not data-plane integration. That's one. The second thing is, even for the data plane, we would probably need to look at a VLAN-to-VXLAN type of mapping to achieve it. That's what I was thinking, but I wanted to validate it with you and the rest of the team.

Maybe it's worth breaking down the flow. Let's finish the control plane first. Regarding data, I know a seamless interconnection may not be possible, but even in typical deployments today, in inter-vendor, multi-vendor scenarios, the starting point is always a VLAN. I don't think that's a bad start at all; if we can accomplish that VLAN interconnect, that's fantastic. That's at least my read on the data plane.

Yeah, but even for VLAN we have to look at how the stitching would happen. The view I have is a ten-thousand-foot view, and when we get into the nuts and bolts we need to see whether it's possible or not. That's where I'm struggling to connect the dots.

So do you have what you're describing written up anywhere, Prem?

No, not to that level of detail. I just have slides that show how...

Slides are fine; if you have slides, we'll just go through them, no problem.

I think Nikolai has it. Yeah, this is the same thing.
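Prem's VLAN-to-VXLAN mapping idea for data-plane stitching across domains could be pictured as a gateway keeping a VNI-to-VLAN translation table, so a VXLAN segment on the NSM side maps to a plain VLAN handed to the external domain. This is purely illustrative, assuming a hypothetical gateway component and VLAN range; no such component exists in NSM today.

```python
# Sketch of data-plane stitching at a domain boundary: allocate a local VLAN
# per VXLAN VNI so traffic can be handed between the NSM overlay and an
# external, VLAN-based domain.

class VniVlanGateway:
    """Allocate a local VLAN (range is an assumption) per VXLAN VNI."""

    def __init__(self, vlan_range=range(100, 4095)):
        self._free = list(vlan_range)
        self._by_vni = {}

    def vlan_for_vni(self, vni):
        """Return the VLAN stitched to this VNI, allocating on first use."""
        if vni not in self._by_vni:
            if not self._free:
                raise RuntimeError("VLAN space exhausted on this gateway")
            self._by_vni[vni] = self._free.pop(0)
        return self._by_vni[vni]

    def release(self, vni):
        """Tear down the stitching for a VNI and recycle its VLAN."""
        vlan = self._by_vni.pop(vni)
        self._free.append(vlan)
        return vlan
```

The point of the sketch is only that the mapping is local to the gateway: the same VNI could map to different VLANs at different boundaries.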
I was trying to find something that illustrates our discussion. So I've added ODL here in my slide and put it behind gRPC; that's the only difference.

Okay. Yeah, let's talk about that.

Hey guys, a question here: in the absence of ODL or any other SDN controller, what is the role of the eNSM? Is it a component that runs on every node of Kubernetes?

Anything external to NSM, it can be a physical device also, we treat as an eNSM so that it can talk to us.

I see, Prem, so it's basically an agent running on the individual element that is actually driving the data plane or control plane.

A good way to think about it is this: instead of thinking in terms of eNSM versus pNSM and so on, look at it from the high-level view of the protocol. When you run the protocol there are two main interactions to think about in this scenario; actually, there are three APIs. The first is the client, a Kubernetes-style client, to the network service manager; that client is called the NSC. The second is NSM manager to NSM manager. The third is NSM manager to network service endpoint, which reuses, or should reuse, very nearly the same API as NSC-to-NSM, just in the opposite direction. When you look at these main classes of APIs, the NSC-to-NSM one is primarily concerned with how you get a local mechanism into the client, which network service you want to work with, and some labels. When you start dealing with things outside of Kubernetes, that NSC label doesn't make as much sense, and that's when you start looking at NSM-to-NSM. Take OpenDaylight as an example: suppose OpenDaylight were to expose
the network service manager APIs, the NSM-to-NSM APIs; then we call it an eNSM, just to give the pattern a name. But that protocol is the same as a Kubernetes NSM talking to another Kubernetes NSM; there's no difference in that respect. The difference is that when the ODL-based or SDN-based NSM receives that request, there's no network service endpoint for it to reach out to: it is the SDN. And if it needs to initiate a connection outward, it doesn't begin with an NSC; it begins with the SDN itself deciding to invoke the NSM APIs with whatever mechanism it sees fit. So when we talk about eNSM we're typically talking about something non-Kubernetes-related that exposes the NSM API, the network service manager API. Beyond that it's mostly a black box to us: you've implemented the APIs, we send things to you, you send things to us, and the contract is fulfilled. Does that make sense?

Yeah, perfect. Fred, thanks for the detailed explanation. I get it: it's like an NSM wrapper, but the internal implementation is really up to that particular implementation. ODL was a good example. It makes sense.

Just on terminology: is it maybe better called a proxy? It's basically proxying that function. It's not an exact NSM implementation, where you have the NSM on each node, but more of a proxy that abstracts things.

The proxy would be this one; I don't know if you're seeing it. I should be sharing, I guess.

Yes, you're sharing.

So essentially, if we take Fred's nice explanation of the APIs: the proxy talks the NSM API on both sides, as you see here, while the eNSM talks the NSM API only on one side; on the other side it talks something specific to the SDN or the hardware, whatever you're running there.
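The three API surfaces Fred lists can be sketched as one shared request/connection contract: NSC-to-NSM, NSM-to-NSM, and NSM-to-NSE all carry a network-service request, and an eNSM is then just any non-Kubernetes system implementing the manager-to-manager surface. Class and field names below are illustrative simplifications, not the real protobuf definitions.

```python
# Minimal model of the NSM API pattern: every edge (NSC->NSM, NSM->NSM,
# NSM->NSE) exchanges a request for a named network service and gets back a
# connection. An "eNSM" is anything non-Kubernetes that implements the same
# manager surface; here the SDN itself answers, with no NSE behind it.

from dataclasses import dataclass, field

@dataclass
class ConnectionRequest:
    network_service: str
    labels: dict = field(default_factory=dict)
    mechanism_preferences: list = field(default_factory=list)

class NetworkServiceManager:
    """Shared NSM surface: same request/connection contract on every edge."""

    def request(self, req: ConnectionRequest) -> dict:
        raise NotImplementedError

class SdnBackedEnsm(NetworkServiceManager):
    """eNSM pattern: no network service endpoint; the SDN fulfils the request."""

    def request(self, req):
        # A real eNSM would program its SDN here; we just record the decision.
        return {"service": req.network_service, "provider": "sdn",
                "mechanism": (req.mechanism_preferences or ["VLAN"])[0]}
```

The "black box" point maps directly onto this: callers only see the `request` contract, and anything behind `SdnBackedEnsm.request` is that implementation's own business.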
Okay, and the pNSM is a more advanced pattern; these are all patterns. In this scenario you see there's no data plane: the pNSM makes a call out, and whatever NSM 2's data plane is, that is what gets provided. So why have the pNSM? The pNSM is able to augment the NSM-to-NSM calls and inject other things, and it can also centralize decisions. Instead of using the distributed NSM-to-NSM patterns, you can put a pNSM there to do something more centralized. For example, suppose you want to pick a route based on some status of your SDN. Instead of trying to do that in a distributed way, where each system has little information, you could make it the pNSM's job to select the route for you, because it has been given much more information to do so, rather than trying to share that information with every single NSM out there. So different patterns arise where the pNSM doesn't actually provide the data plane itself but is able to augment the request or the data plane in some way.

Yeah, some form of centralized resource that can play a bigger role. And Fred, is it accurate to say that the eNSM is more like a gateway NSM, doing a gateway function between the NSM side of the world and the non-NSM side of the world?

Well, I'd say both sides are the NSM side; it's more Kubernetes to non-Kubernetes, or you could even have non-Kubernetes to non-Kubernetes: the ODL example I gave could be going to Kina, right?

Yeah, I think gateway is also a very reasonable description. Depending on the audience, many people understand the term gateway well; for other audiences it could be described as an external controller. It depends on the audience. We can explain it, but I think the term captures the function.
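Fred's pNSM example, centralized route selection, can be made concrete in miniature: the pNSM provides no data plane of its own but sits in the NSM-to-NSM path and uses a global view to pick which downstream NSM serves the request. The cost table and the `forward` callback are assumptions for illustration only.

```python
# Sketch of the pNSM pattern: augment the NSM->NSM call with a centralized
# routing decision that a purely distributed NSM, with only local knowledge,
# could not make.

def pnsm_select_route(request, candidates, link_cost, forward):
    """Pick the cheapest candidate NSM for a request, then forward to it.

    candidates: list of NSM names able to serve request["service"].
    link_cost:  dict name -> cost; the centralized knowledge the pNSM holds.
    forward:    callable(nsm_name, request) doing the actual NSM->NSM call.
    """
    if not candidates:
        raise LookupError(f"no NSM offers {request['service']!r}")
    best = min(candidates, key=lambda name: link_cost.get(name, float("inf")))
    return forward(best, request)
```

Note the pNSM never touches packets here: it only rewrites where the request goes, which matches the "augment the request, not the data plane" framing above.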
The function is well put: a proxy is the advanced functionality, whereas this is more of a translation.

Right. I think gateway is a good term for most uses. I get a little concerned that there might be some non-gateway-style functionality, but I think it's a good term; it helps people understand.

Maybe we should bring this up on the documentation call, because people there are very specific about picking the exact words.

Yeah, because I'm passionate about it.

The area where we're a little hung up, and I don't know if it's an issue or not, we can leave it to them to decide, is whether the gateway can make calls out through other things. To me, a gateway implies a bunch of inbound connections that it terminates; but what if it's the one making all the connections outbound? Is it still a gateway? Maybe it is; I actually don't know.

It's absolutely valid that you have your clients here on this side and they reach out to services registered in Kubernetes; the Kubernetes side might be my firewall cluster.

So this is very good. Regarding implementation, Prem...

Sorry, I stepped away for a few minutes; I'll come back.

I can probably answer that and then Prem can take it. With respect to implementation, we are planning to demo it next week at ONS, so it's work in progress.

Okay. So on this, basically, would it be the NSM...

No, it wouldn't be the eNSM; it's just the reverse path, which is essentially ODL invoking NSM. After that we'll probably start working on the eNSM part.

But essentially, if you are talking some form of NSM API, which it seems like
you do, maybe we can call it a mini or micro version, an alpha, I don't know, something.

In fact, that brings up another important discussion: what do we call it when these external entities want to call in? Actually, I don't think that should be a problem, because it's a client at the end of the day; they can always...

Yeah, that is not a problem.

Prem, sorry, I somewhat don't understand the reverse direction. How will the reverse even work? I'm a little confused.

Okay, let me explain. For a moment, in this diagram, assume this is OpenDaylight or some SDN controller. Now, NSM is exposing a firewall here; this is the firewall, the pod that offers the firewall. What OpenDaylight wants is to invoke this firewall, which means what OpenDaylight essentially needs is access to the API endpoints. For now it is static in nature: this particular firewall has exposed those endpoints, and what we do is take the proto information, create a YANG file, and host it along with OpenDaylight. The user of OpenDaylight can browse, see this as an RPC endpoint, and invoke it; the call then lands on NSM and the whole call flow gets invoked.

But you don't talk the Kubernetes API for requesting services; you have this more or less statically defined somewhere?

That's right, that's right.

So you have some lightweight version of this part here.

Absolutely. We basically translate all the endpoints to a YANG model.

Okay, right. In fact, I was having a discussion with Frederick last week: what we can do is make it OpenConfig YANG, so that in the long term we can always say NSM is compliant with OpenConfig YANG and start implementing those.

So if I understand right, Prem, all that you don't support yet is the dynamism: the whole service setup, NSM one, two, three calling you. That's the only part you don't support.

Yes, yes, you're right. As the next step, to add dynamism, we need to look up a particular service, get the endpoints, and convert them to YANG; the moment you convert them into YANG, they become available as RPC endpoints, and from then on you can invoke them.

Okay, so guys, again, one basic question: the NSM-to-NSM interface we show here, is it also driven by the CRDs?

No, no, this is gRPC; it's a protobuf-described API.

True, but is the model for that gRPC, protobuf-defined API still described in a CRD?

No, no. The CRDs are used, as we saw at the beginning in the other document, I don't know if you were there for the beginning of the discussion, to describe the service-level registry: the network services, network service endpoints, and network service managers. These are the three components, or kind of tables, that we keep effectively in Kubernetes etcd, where we keep the records for our services, endpoints, and managers.

I see, okay. Bear with me, my dumb question is: why is NSM-to-NSM not also defined via CRDs? Was there a good reason? If you see what Prem was explaining, he's basically modeling those endpoints in YANG and they become an RPC call away.

Yes, but the API... Fred, maybe you have a better explanation, but mine is that the protocol is a little more than just describing the endpoints: part of the protocol is negotiation between the NSMs.
For example, the VNIs that we use for the VXLAN tunnels, all these things are part of the negotiation here. So I'm not sure you could express that in CRDs; there's real negotiation going on. Fred, do you have a better answer?

Yeah. In terms of the CRDs, we primarily use them as a registry, and in fact the registry itself is accessed through gRPC; we don't access the CRDs directly. That means you can have something that's not part of Kubernetes, a non-Kubernetes network. The first point is that we want to be careful not to tie ourselves to Kubernetes as the required underlying thing, and when you deal with CRDs you're bound to it; so we have that one layer of abstraction. The other problem you run into with CRDs is that we're designing this for very high scalability. If you look at any NSM manager, it doesn't have to know about the entire world; it just has to know about its connections: who am I connected to, what connections do I have, what remote mechanisms do I have, what local mechanisms do I have, what resources are they connected to. From a scalability perspective that's fine for a distributed system, but when you start to scale out at much higher rates, we don't want to put status information or negotiations directly into the CRDs themselves. etcd is already a strange resource in Kubernetes, and if we add a high volume of connections into etcd we will very likely run into scalability issues with CRDs. So we have to be careful from that aspect too.

Got you, got you. I agree with the scalability aspects as well. Makes sense. I was just wondering if there is a modeling language on the Kubernetes side.
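Fred's scalability argument can be made concrete: the shared registry holds only coarse records (who offers what), while each manager keeps its own connection table locally, so load on etcd does not grow with connection count. The class and field names below are assumptions for illustration.

```python
# Sketch of the state split Fred describes: registry writes are rare and
# coarse; per-connection state stays local to each NSM manager and never
# touches the shared (etcd-backed) store.

class LocalNsm:
    """An NSM manager that knows only its own edges, never the whole world."""

    def __init__(self, name, shared_registry):
        self.name = name
        self.registry = shared_registry   # written rarely: services, peers
        self.connections = {}             # written per request: local only

    def connect(self, conn_id, service, peer):
        # Per-connection state (peer, mechanism, status) stays off the
        # shared store entirely.
        self.connections[conn_id] = {"service": service, "peer": peer}

    def shared_record_count(self):
        return len(self.registry)

# 1000 connections later, the shared registry is exactly as big as before.
nsm = LocalNsm("nsm-1", shared_registry={"firewall": "nsm-1"})
for i in range(1000):
    nsm.connect(i, "firewall", peer=f"client-{i}")
```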
For example, ODL defines it in YANG. But that's okay; a protocol-level interface, if it's well defined, is also good.

Yeah, I'm not familiar with any such modeling language at this point.

If I'm not mistaken, YANG can run over pretty much anything, right? YANG is just the modeling; the payload depends on the transport: if you use REST it becomes JSON, if you use gRPC it becomes protobuf.

Absolutely, you're right; you can transport YANG-modeled objects over any interface, REST or gRPC. And I think gRPC is super lightweight, highly efficient.

Highly efficient, yeah, and scalable too. Exactly, exactly.

Sorry guys, bear with me.

No, no, it's actually very good. In fact, Prem, maybe one suggestion if you're going broader and explaining this: I would say you're not doing something "reverse", you're doing a more static setup. I thought that might be a better description, because if you say "reverse" it doesn't sell the value of your demo. All you're doing is the static version, and dynamic is just one step away. I'm actually tending to think of it more that way.

I think we lost Prem's audio; he probably stepped away.

Okay, we'll talk to Prem offline on that. But you're right, Ramki, that's a better way of explaining it. Exactly, because there's so much value coming out of the effort he's putting in; I don't want to devalue it by saying "reverse", which makes people think it's not even complete. All we're doing is static: a phase-one implementation of eNSM.

Exactly, exactly. And we should put out this slide, and maybe we can even talk about some of the APIs.

Yeah. So if we get back to the original
question you raised, Ramki: essentially we have to put ONAP somewhere here, I guess. That's a big question people are going to be interested in.

So what I'm thinking is we can say: ONAP, or it could be OSM or whatever, all they need to implement is this eNSM service. Basically, as part of their implementation, expose the NSM APIs and there we go. And, as Fred pretty much said, we can say: here's a simple starting point for you on the APIs. If you're doing VLAN-based interconnect, you can just start there, and that covers the predominant set of use cases. Let me put together some slides around it; I'll basically take this, put some story around ONAP, and try to generalize it to other efforts, ONAP and OSM and everything.

So Ramki, similar to what Prem is doing from Lumina: from the VMware point of view, do you guys have any clear idea on how VMware use cases fit into eNSM?

At least one idea from our side is not about eNSM, but just delivering the interconnect itself, for example within a cloud or across clouds, using NSX-T; NSX-T as a CNI, sort of. That's what we have in mind; you just have to put it together.

Got you. So that would fit more into an SD-WAN style use case, right?

Correct, SD-WAN would be a good use case, or it could just be service chaining; it's pretty generic. Basically, come in and solve the multiple-network-interface problem.

Right. So, generically speaking, a distributed edge.

Exactly, the use cases we have been talking about. Correct.

Okay. So on this, can we take some of the NSM protocol, at least one case such as VLAN or VXLAN, talk through it a little, and see if something can be
done on the data plane? What do you guys think? At least one case; I want to pick one and see, on the data-plane side, if there's some sort of negotiation possible, and talk through it a little.

I'm not sure I get the question.

So what I was thinking, Nikolai, is: basically take the NSM gRPC APIs, look at them, and focus on the simplest one, the VXLAN or VLAN case, and see whether we can... I'm trying to find where... Fred, where do the APIs live?

The registry, remote... yeah, probably this one, remote. I usually just search for the .proto files and then look for the right one.

Network service... what is this, ServiceRequest? But this is the request, not the connection context. No, this is the monitor. No... okay. Do you want to do this now? We have 15 minutes.

I was just thinking, because on the data-plane side I do think simple VLAN is possible, and since the team is here I'd at least like to see what's possible.

The data plane is a different thing; the data-plane API is this one here, where the NSM talks to the data plane.

Okay. Actually, no, I want to look at the control plane, because that is where the agreement happens. So my use case is super simple: all I want is, when a new connection comes in, to assign a new VLAN to it. I'm not even getting to VXLAN. Can a new VLAN be assigned, with both sides agreeing to the same number and going and programming it? A new enterprise customer comes up, new VLAN, done, boom. And then can you stretch it on both sides? That's a good start.

Sorry, I'm back. So I have a question. When we talk about the data plane, I was trying to draw a parallel with
our NSM data planes within the same domain. There, essentially, you inject those interfaces into the pod and make it work, whereas the same may not be applicable when we talk about eNSM, because they are quite disjoint worlds. So in this case we probably have to look at the use case in a different way, because even though the tunnel is there, if it's a VXLAN tunnel, one side is injected into the pod, but on the other side I may or may not have the ability to inject those interfaces.

Prem, let me make it super simple for you, super duper simple. Forget the overlay, everything; take a pod which is implementing SR-IOV. What happens is, when a new enterprise customer comes in, we add a new VLAN. On the SR-IOV NIC side we add a MAC address with the VLAN so the packet can be directed, and you pick a VLAN number, say 100. Now, on the switch side, we want to make sure the same VLAN is programmed. That's all we're looking for; it's really simple. I'm not even getting to VXLAN; let's start small: both sides agreeing to the same VLAN and programming it for a new enterprise customer. Once we have something basic running, everything else, VXLAN and all these things, can easily be built on top. So all we need on the control-plane side is to make sure we agree on some VLAN number; I think we already have some sort of assignment happening today. So is that possible in the control plane, and what is possible today?

In that particular example, Ramki, the VLAN is very localized to that physical port; it doesn't need to be globally unique.

Exactly. So the port plus VLAN on the physical-network side needs to match up with your SR-IOV MAC plus VLAN. Correct.
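Ramki's "super simple" case amounts to: per enterprise customer, pick one VLAN and program the same number on both sides of the local attachment, the SR-IOV side (MAC plus VLAN) and the matching switch port. A minimal sketch, with device programming stubbed out as plain dicts and the VLAN pool range an assumption:

```python
# Sketch of the single-VLAN agreement: one allocator hands out the number,
# and both the NIC view and the switch view are programmed with it, so the
# two sides agree by construction. The VLAN is only locally significant to
# this attachment, per the discussion above.

_vlan_pool = iter(range(100, 4095))  # assumed locally significant pool

def provision_customer(customer_mac, port, nic_table, switch_table):
    """Pick the next free VLAN and program it identically on both sides."""
    vlan = next(_vlan_pool)
    nic_table[(port, customer_mac)] = vlan          # SR-IOV VF: MAC+VLAN filter
    switch_table.setdefault(port, set()).add(vlan)  # switch: allow VLAN on port
    return vlan
```

In real life the "both sides agree" step is the interesting part: the allocator would live in the control plane and the two programming actions would go to different systems, but the invariant, same number on both sides of one port, is the whole use case.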
So there are two cases I was thinking of: L2 and L3. On L3 you're spot on; that's the easier one to tackle. L2 means you have to have a global VLAN ID. But for L3, yes, it's a locally significant VLAN and that's it, and even that's a very good starting point.

Correct. But even for L2, Ramki, we shouldn't assume the VLAN is global, because of overlapping-VLAN use cases. When you go into brownfield deployments you get into scenarios where the VLAN number is already used, but you might want it to fit into a different broadcast domain. As a result, a particular VLAN number at a given attachment point could get connected to a broadcast domain which is using different VLAN numbers at other attachment points. It's very important for multi-tenancy, and I'm presuming that's an important goal here as well.

Correct, you're right. So brownfield would definitely need some sort of VLAN translation in the L2 case.

That's correct. And don't you think we need the NSM control plane to be aware of which VLAN tags or VLAN numbers are available outside, as a global value?

Yes, it needs to be aware, absolutely. Even if what you model in a particular example is a local attachment point from the pod to the physical network, the NSM still needs to be aware; it needs to manage that local namespace and do the mapping between the local namespace and the global namespace. So it needs to be aware, in short; there are just more details involved. We could start with a simple example where the local label is also the global label; we can solve that first.

Correct, that's sort of what I was thinking. And L3 is now probably the more common deployment model, right? In the end it'll probably be simple, local, and layer 3.
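The brownfield case just raised, the same broadcast domain appearing as different VLAN numbers at different attachment points, comes down to the control plane keeping a mapping between local VLAN namespaces and a global broadcast-domain namespace. A sketch of that data model, with all names being illustrative assumptions:

```python
# Sketch of brownfield VLAN translation: bind (attachment point, local VLAN)
# pairs to a global broadcast domain, then translate a VLAN seen at one
# attachment point into its number at another.

class VlanTranslator:
    def __init__(self):
        self._to_domain = {}    # (attachment, local_vlan) -> domain
        self._from_domain = {}  # (attachment, domain) -> local_vlan

    def bind(self, attachment, local_vlan, domain):
        self._to_domain[(attachment, local_vlan)] = domain
        self._from_domain[(attachment, domain)] = local_vlan

    def translate(self, src_attachment, src_vlan, dst_attachment):
        """Map a VLAN at one attachment point to its number at another."""
        domain = self._to_domain[(src_attachment, src_vlan)]
        return self._from_domain[(dst_attachment, domain)]
```

The "simple starting point" in the discussion is the degenerate case where every attachment binds the same number to the domain, so `translate` is the identity.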
Ramki, you know, I have been seeing a lot of layer 2 use cases also. Yes, layer 3 is needed, but at a service level they are still asking for layer 2. Sure, fair, we should definitely look at both. Excellent. So with that in mind, Nikolai, Fred, from the control plane perspective, are there numbers being negotiated? Let's say all we are looking for is simple VLAN connectivity between two endpoints; what gets negotiated? I know the type gets negotiated. Yeah, I think the general rule of thumb we use in NSM is that the initiator just asks for things, and then the decisions are taken on the termination side, which here would be the endpoint side. Yep. There's a nuance to that: the initiator can set some boundaries. Yeah, of course. For example, the initiator might say, I support VXLAN, but please use one of this set of parameters, and set a constraint; then the receiving end, the endpoint that receives the request, can select out of that list. So it's not like the client has no say in it. Yeah, of course. So with the initiator setting a constraint, and assuming these are all local, there is some local number generation. Going back to the simple VLAN case: how do we program the range? How does that programming happen between the initiator and the responder? So the protocol actually starts from here: you have this RPC Request which holds the network service request, and it gets back a Connection. No, Nikolai, what I meant was the range we were talking about, the range to use. How does the endpoint know which range to use? How does that get programmed? I will tell you.
So that would be negotiated here between the network service manager and the data plane. It should be a configurable range; that's what I meant. Yeah, but where does that configuration come from? In general we should just push that to the administrator; it's simpler for the design. We should just assume a range of x to y and allocate from that. Yeah, there are scenarios, Nikolai, where if you want to make that local namespace global, you'll get into conflicts, so just give it to the administrator. Yes, the only thing is what the administrator configures, and to the best of my understanding the administrator would actually be configuring the data plane; the network service manager doesn't have this notion at all, from what I understand. Right, Nikolai, but it doesn't need to be a data plane aspect; it's just a configurable VLAN range. Not every VLAN number in that range will go into the data plane. It's just a namespace, a range, that is required for us to make this whole plan work. Yeah, I think it should go into etcd; that's what I mean. Exactly, it should. Ideally it would be part of the service: a range per interconnect in the service is the ideal scenario, but minimally, as a starting point, put it on the entire service; then we can say per connection in the service, and then we can have different ranges at the next level. Right. Yeah, I don't think we have something similar today, but there's no problem in initiating a spec and trying to figure this out, and I'm sure that once we figure it out... Just to be on the same page, this is only for an SR-IOV use case, right? Whereas in the non-SR-IOV case it has to be a VXLAN-to-VLAN mapping, correct? Yeah, and SR-IOV with VNFs is very common, as you very well know. Yeah, I just want us to start small, very simple.
No, no. So guys, even in the absence of SR-IOV, Prem, let's say you're doing non-SR-IOV, a regular interface to the underlay network: even in those scenarios you cannot assume that the VXLAN will start on the server. There will be deployments where you just do an 802.1Q VLAN to the underlay, and the underlay will be starting the VXLAN tunnel. So in general, a VLAN way of signaling from the server to the underlay network will serve the purpose not just for SR-IOV but even for the non-SR-IOV use cases. Yeah. So one thing I wanted to discuss, Funny, is that in the non-SR-IOV case, today the usage of VXLAN makes it simple, because you essentially inject one of the endpoints into the service and the other one into the client, since VXLAN provides programmability. But if you have to play the VLAN part: VLAN was essentially meant for a physical port, so what I'm trying to say is that from a Kubernetes perspective I cannot really create a VLAN for a service or a pod. I see what you're saying; you're looking at it from the underlay as well as the overlay perspective, whereas I was looking at NSM from a VXLAN, overlay perspective. That's a good point, Prem.

From that point of view, a pod is basically looking for a subnet that it can insert itself into and participate in. That subnet, in SR-IOV use cases, can map to a VLAN as an attachment point; or when, say, a vSwitch is doing the VXLAN on the server itself, the connectivity to the pod will be some kind of virtual port, and you can model it as a VLAN or as a generic subnet. I understand that the vSwitch will have to convert it to a VNI, but you don't want to push the VNI onto the pod. The pod had better be generic enough that it can operate in, say, a segment routing environment or a VXLAN environment or anything like that. So you should generically model it as a subnet or a VLAN; that's how the pod will interface, whether it's a physical NIC or a virtual switch it is interfacing with. What I feel is that the lower-level numbers, the VLAN tag and all that, should depend on configuration at the lower level; at the higher level you should have APIs toward the lower level that make all these things transparent. That will simplify the whole thing. That's what I think. Yeah, absolutely.

And Ramki, are you really seeing SR-IOV with container pods, with VNFs? I know it's pretty prevalent with VMs, but with containers? It's definitely something to seriously think about, because from a performance standpoint you want to do the right thing. It's not as prevalent as with VMs, but you realize you have to load the SR-IOV NIC driver into the container. Oh yeah, I completely understand; it doesn't come for free. But I think the flexibility of containerization kind of goes away. The other way to think about it: a pod dies in the Kubernetes world, it doesn't matter, it'll get rescheduled. Why? You don't need to inject the driver; you can keep the driver on the host and then just inject a Linux interface. Correct, Nikolai, that's what I was proposing in our last meeting: a container should never see a PCI interface. Well, it depends on the container; if it's, for example, some form of DPDK application, why not? No, even if it's a DPDK application, you can interface with vhost-user or memif; it's not necessary to have a PCI interface. The poll mode driver will run on vhost-user as well. It's a shared memory interface.
That's far more efficient than trying to walk through the PCI bus and all it involves. But the mobility of shared memory comes with sharing issues, right, Funny? It has its own trade-offs: it's not full slicing, it's zero copy. What I meant was that the shared memory is not exclusive to this container; it's a shared pool across containers, so you have to look at it that way. Correct, but that's the same problem you have with VMs in the picture and a DPDK-based implementation on the vSwitch. It's the same scenario, Ramki; instead of VMs, now replace them with containers, and there is memory management you need to do. That's actually going to be a differentiating feature: based on the SLA, based on the QoS requirements of the container, you will have to do better memory management. Whereas if you do full slicing with SR-IOV, of course the pain points are there, you have to expose PCIe, but you get the full slicing. That's another solution direction; I'm not saying it's the best solution.

When I was having detailed discussions with Verizon, this is what we brainstormed. I said, look, do you envision the number of containers to be exactly the number of virtual functions on a NIC? Let's say those are going to be 256; then yes, go ahead with SR-IOV, invest your money, and so on. But if you will have any number of containers or pods, more than 256, more than the number of virtual functions, say in the thousands, then you simply cannot map them to the hardware, to a physical function or a virtual function. How do you solve that problem? And they do have such scenarios; not just them, other carriers will have them too. In that scenario you have to do muxing and demuxing, and you will be forced to do some kind of buffer management between multiple containers trying to use the same single hardware resource. So that's a fair analysis. And we also heard, and I'm not saying this is the only reference point, that typical usage would be maybe 100 unique pods per node, per host. So basically we need a family of solutions; correct, there are always trade-offs, depending on the scenario. The other useful point I learned, not from Verizon, and it's better I don't name them: let's say you're terminating TCP and UDP on the container, meaning the destination IP is the container's IP. Then the number of connections is going to be very large, and there will be a large number of containers running on that server node as well. But let's say you're implementing something like a firewall CNF: you're really not the termination point for the IP, you're a transit node, but you have policies to enforce on the traffic. Those kinds of VNFs or CNFs don't see more than 256 per server. That makes sense. So both scenarios are valid, and we don't need to over-design on day one. We could start simple, where each CNF gets its own virtual function, solve that problem, and then go for the next one.

Perfect. We are almost out of time; sorry, I have to jump onto another call. This is excellent. Nikolai, would you be able to share those nice slides you have? Yep, I will send them. I think this is very good; we have now gotten much crisper, especially on the data plane, so at least we have a story lining up on the VLAN case. And Prem, we were just chatting that for your messaging, I think you're doing much more than the backward initiation from ODL; really, you're almost there. You're starting with static connectivity, and the only simple next step is dynamic. That's probably a better way to position it. Yep, I agree. All right, cool, thanks. Thank you, excellent, thank you all so much. So I will update; all we need to do is the Google Calendar update, right? Correct, yes, I've given you the permission to update it. Awesome. Nikolai, please share the slides, and I'll also start working on bringing ONAP and ODL together. ONAP, sorry. One other thing: I added the slides to the meeting minutes. Okay, perfect, awesome, thanks. Bye, thank you, thank you very much.