from IRC on your phone or your iPad, you can do that — or not, if you don't turn it on. So cool. Awesome.

So let me go ahead. Do we have someone who would be willing to share the agenda with the group? Actually, let me just go ahead and do that, since I happen to be running today. For those of you who don't know — we've got a lot of folks who are new to the call today, and I'm delighted to have you — normally Kyle runs the meetings, and he's very good at it. So if it seems a little bit like I'm catching up, it's because usually we have really good meeting runners running the meetings. So one second while I share. Awesome. Cool.

So — I just found out right before this meeting that Kyle wasn't going to be here; I thought he was running the meeting. So the agenda may be a little bit light today; we may have to make it up a bit as we go along. Quick agenda bashing first: are there other things that people feel we should have on the agenda?

I was about to add one item, but I can do it right away as we speak. I don't want to stall the meeting.

Yeah, please feel free, folks, to add your items.

Yeah, I'm going to do it as we speak. Go ahead, start discussing the other ones.

I would like to add the testing thing. I see Michael on the call — we exchanged some emails. So can we look at the CNF testing point? I don't know where it fits in the agenda today.

We absolutely can. Would you be willing to say a few words about that, Michael, and talk with us about the VNF/CNF testing?

Yeah, we can talk a bit about that. That's fine.

Awesome. Anything else that folks want to add to the agenda?

Sorry — I'm adding interaction with Kubernetes (K8s) network policy.

That's an interesting topic, so we'll definitely get there. Anything else before we start diving in? Cool.

So ONS Europe is going on next week, and we do have a talk by Kyle and Frederick.
I also expect — I think there's an unconference that will be happening that involves network service mesh, and rumor is, as soon as Frederick hits the ground and finds a decent bar, there will be something put up about the future of network service mesh. So if any of you are going to ONS Europe, there should definitely be lots of network service mesh stuff to engage with. And the place to go for that: we have our network service mesh webpage, and we have an events section, and the page for ONS Europe will be updated as more of the supporting collateral and other things come online.

So do you expect to have — you know, I'm arriving there on Tuesday; I have a talk on Thursday on FD.io and Kubernetes — do you expect to have some sort of meeting point that will be live all the time, or what would be the best way to hook up, apart from IRC? Will there be some sort of continuous NSM thing going on, or something periodic, daily or twice daily?

From what was discussed last week, I don't think there's a continuous track going on. There's the talk that's being given. There's an unconference topic that's been put forth to the unconference section of the program. And then, once they identify a bar, they will be talking about doing a network service mesh happy hour. Just to give you some idea.

So what is happening in the bar? You mentioned — yeah, you said once they identify the bar there will be something happening, but your audio is being scrambled on my side.

A network service mesh happy hour.

Oh, okay. NSM happy hour. Okay.

When we did the Open Source Summit, you know, all the various supporting collateral went up. There was a happy hour that was held there. They captured frequently asked questions.
They haven't yet updated the event page on the network service mesh site with that information, but that's great.

All right. Okay. So there will be a happy hour. So there will be a meetup point in the bar — will it be live every day, or do you know?

I expect it will only be one happy hour for the event.

Yeah. Hopefully during the event, not before the event.

Yeah, that's my expectation. Okay. Thank you.

So there are other events coming up. I do expect that we'll be doing a lot at KubeCon for network service mesh. I know a lot of people submitted talks. And I know that one of our major goals is to be able to demo network service mesh at KubeCon, and there are several booths, both project and corporate, that would be very interested in hosting such a demo.

Cool. So really quickly, we do have a GitHub project that we typically use to comb through the issues. So let's take a quick look at that and see where we stand.

On the to-dos, we have issues. We've got the traditional 12-Factor CNF. I don't know how many folks are familiar with 12-Factor apps, but one of the things we identified is, as we gain operational experience, we need something like that for CNFs — something that describes: okay, if you want to make a cloud native CNF, this is what it looks like.

Migrating errors to Go errors — Frederick has got that in progress.

We have an ongoing conversation about becoming a Kubernetes working group that's been stalling a little bit as we've been rushing forward with development. But one of the things we've got going on is trying to figure out what the proper formal home is. When we asked Kubernetes SIG Network, they suggested we become a Kubernetes working group.

So Ed, you said it's been stalling. Is it because of a lack of interaction with the Kubernetes community, or are they pushing back? I mean, which side is the stalling on?
The stalling basically comes back to: we're trying to draft the proper set of collateral for opening the proposal to become a Kubernetes working group. And in the course of trying to draft it, one of the things that's come back — and this is where it gets to be a little interesting — is SIG Network would like us to be a Kubernetes working group. Kubernetes likes us, but they're trying to make working groups be just about producing spec docs. And of course, we're producing code. So there's a little bit of confusion as to how that sorts out.

Okay, I was just curious.

Yep. No, generally the Kubernetes community could not possibly have been more supportive of us.

So Ed, if working groups do specs, who does the code?

Yeah. So in principle, the code gets done in SIGs. But again, it's not clear how all this settles out, because one of the things SIG Network was very happy about was that we thought we didn't need changes in existing Kubernetes at all. But that also means we don't necessarily fit in quite the normal way as a Kubernetes SIG. So effectively, we're doing things they like, the result of which is that their process is no longer a great fit.

Okay. Thank you. So interestingly, for example, the resource management working group actually produces code. So I think it's probably not a standard rule. I know that for sure.

I totally agree with you about that. I think part of what's happening is they literally had no standards for working groups before. And now they're trying to write the standards, and the standard being put forth is that working groups don't produce code. So it's all gotten sufficiently weird that what's happened is what happens normally in these situations, which is: developers just write code, because that's more fun than wandering around writing documents. So we should probably get back on that.

Cool. So there are efforts going on around documentation infrastructure.
I think that should actually be closed out, like the Go errors — I think the documentation infrastructure has been done. I think we do have docs on the website.

The "how to get a privileged container" — I don't know if that's been done. The "stop relying on hostname to identify pod" — I need to find out what was going on with that with Frederick.

So there's also — and this gets back to — I think we're missing a lot of the Volk folks because they're en route to ONS as well. Actually, we have Watson here. So Watson, do you know what's going on with the "support CNCF CNF project" item? Did you guys get together the kinds of stuff that you needed us to do?

I think Taylor's still trying to put it together. It's really just going to end up being an update for Dan for the CNCF. So yeah.

Cool. And then John — "separate out concerns for the audiences of NSM to make it more accessible". Did you have some particular thoughts on this? Let me go ahead and open that one since you're here to speak to it.

Yeah. As I said there, I think there are three audiences we're looking at. One is people developing NSM framework code and APIs. Then people developing plugins — i.e., here is a network service for IPsec, or a distributed switch. And then people who just use this and say: oh, there's a distributed switch out there, good, I can just plug it in. And it kind of goes to some of your presentations about the end user who says: I want this, and I don't want to know what NSM did to make this happen — just give me enough information to do this. And I think we need to figure out how to speak to those three audiences separately.

And I think we had some discussions before about the current code base and the way it's laid out — it mixes it all together. You can piece it together if I get on IRC and ask you or Kyle or Frederick or Sergey; you'd point me to where to look. But there's not, you know, a clear way to enter this for each of those three audiences. Okay.
No, that makes total sense. And it sounds like you're suggesting sort of two things. One is documentation, organized by audience. And the other one is perhaps, you know, moving the code base around a little bit so it's set up a little better for consumption by those three audiences.

Yeah. If I wanted to drop in a plugin, I don't want to touch any of the framework code or any of the directories where the framework code exists, because I don't want to mess it up.

Now, when you say a plugin, are you talking about writing a network service endpoint? Because you mentioned that earlier.

Yeah. If I take what Sergey did with his test data plane — his test data plane, I think, touches four or five directories in the current tree. And a couple of those directories are part of the core NSM framework.

Okay. That's it.

So it would be nice to have a different way. So when I enter the code base and I drop a PR and say: here's a new PR, here's, you know, a distributed switch — everybody knows it's not affecting any of the existing code. And then I also drop in to say: oh, if you want to use this distributed switch, here is, you know, the YAML file or the YAML documentation to go and do that.

Yeah. So you're almost asking for an NSC directory. I know you're working on something like this as well, Tom.

Well, I mean, the data plane is not a great example, because basically we haven't finalized the protocol — the protobuf to talk to the data plane — yet. That's why there's kind of a simple data plane protobuf in the plugin tree, or in the NSM package APIs. Eventually it will go away, or it will stay, but it will be common for every data plane plugin. So basically you will not need to make any changes there, as long as that protobuf gives you what you want for your NSC or data plane or whatever you develop on top of the framework.

Yeah. I'm not commenting on where we are. It's more to say this is directionally where I think we should think about going.
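Purely as illustration of the kind of contract being described — the data plane protobuf is, as noted above, explicitly not finalized, so every name below is hypothetical rather than the actual NSM API — a minimal gRPC data plane contract might look something like:

```protobuf
// Hypothetical sketch of a common data plane plugin API.
syntax = "proto3";

package dataplane;

message Connection {
  string id = 1;
  // Interconnect mechanism on each side, e.g. "kernel-interface" or "memif".
  string source_mechanism = 2;
  string destination_mechanism = 3;
  // Mechanism-specific parameters (addresses, socket paths, etc.).
  map<string, string> parameters = 4;
}

// Every data plane plugin would implement the same service, so NSEs and
// NSCs would not need per-plugin code.
service Dataplane {
  rpc Request (Connection) returns (Connection);
  rpc Close (Connection) returns (Connection);
}
```

The point of keeping such a contract common across plugins is exactly what the discussion above calls for: a new data plane can be dropped in without touching framework directories.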
I think that's actually exactly right, because what we would like to be able to do is have a clear place and a clear procedure whereby people can contribute a data plane or a network service endpoint — and also clear traditions as to how those then get consumed, so that it gets to be really, really easy for people to contribute those components, because those should be relatively modular components. And then, even beyond that, we may also want to offer places for people to contribute eNSMs — external NSMs — or pNSMs — proxy NSMs — to do some of the more sophisticated things. But I think some of that's going to have to emerge as we sort out some of the APIs between those layers in the system, and we're still figuring some of that out.

Yeah, I know — I figured. But I think if you set those as the goals for how we want to evolve — and I put down those three audiences; feel free to change or modify them, I'm not saying they're the be-all and end-all — I think we should think about how to address the different audiences.

One thing that could help: we have scripts to deploy NSM in the CI environment, but we have very little on how to deploy NSM in production or in a lab environment. So maybe something like a Helm chart, which would go and deploy all the required pieces, could help people consume it more easily.

I think that would probably also be useful. You definitely want consumption to be as easy as humanly possible. So I appreciate you bringing this up, John. I think it's actually exactly where we want to go.

John, this is actually a very good one. So just thinking on the API — should we also talk about other relevant tech around the API, such as Kubernetes network policy? I mean, essentially at that point the audience is really looking at the big picture, right, beyond NSM.

Yeah. So we do have that on the agenda.
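The Helm idea floated above could look like the following; the chart name, path, and values are entirely hypothetical — no such chart is assumed to exist in the repo — but it shows the shape of "deploy all the required pieces with one command":

```shell
# Hypothetical: a single chart bundling the NSM DaemonSet, CRDs, and a
# chosen data plane, so a lab deployment is one command instead of a
# pile of CI scripts.
helm install nsm ./deployments/helm/nsm \
  --namespace nsm-system --create-namespace \
  --set dataplane=vpp
```

A chart like this would give lab and production users a supported entry point that never requires reading framework code, which is exactly the third-audience concern raised earlier.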
Let me bounce back once we're finished running through the Kanban board, and then we can come back to that on the agenda, if that's cool with you.

And I was not pushing for that. All I was saying was: in the end, when you have the context of APIs, people also want to know the bigger picture. So, what is the NSM API versus Kubernetes network policy? And, you know, when we get there, I'd also talk about service meshes. There's a little bit of overlap there in how this all comes together, right? People are interested in that when they're looking at APIs. That's what I meant.

Totally. Cool. All right, then. So — the L2 forwarding with VPP example. Tom, I know you've been working on this. How is that going?

Yeah, I don't have a pull request yet. I have been looking at the API doc that Sergey prepared and merged, and I'm looking at two things simultaneously — and this is relevant to the discussion we've just been having the last 10 minutes. First: what additions to the API doc do we need for, if not quite a data plane, something focused on the low-level delivery of the actual resource that the NSM endpoint needs in order to construct this L2 forwarding plane? We have some of that definition, but there are a few things that are missing or could be augmented.

And secondly, one of the things I was thinking about was the possibility that there are other communities working with CRDs — like Intel is apparently developing some. I know they're not all there yet, but I saw them referred to in some preliminary slides for another project that's working on Kubernetes, and they're developing CRDs for core pinning — for things that you would need for SR-IOV, or more bare-metal-focused stuff.
And whether we need — we have to have a way, since we're orchestrating this through NSM, to specify or coordinate the resources that the NSC provides, and whether that NSC can utilize, in the DaemonSet or the init container, the standing up of some of the CRD resources, and how much of that information we need, you know, propagating through NSM. So I've been thinking about that as well. To do this really requires both the code and some adjustments to the protocol, and ultimately I'll get this done; it may end up being something like what Sergey did, in that it would be part of the tree and not really a clean division leveraging other CRDs and so forth for the NSC. I'm sorry, go ahead.

I think we have to be really cautious there, because I've been attending the resource management working group quite a lot, so we're wandering around through that stuff, and there are a bunch of things we will eventually want to get to in terms of NUMA zones and that kind of stuff that are going to be important. But the only one of those proposals right now that has actually been accepted in any way, shape, or form is the CPU pinning proposal, and that one actually doesn't have any CRDs associated with it at all. So, you know, there's not a lot to do there. I think we need to proceed on the presumption that some of those NUMA sorts of problems are going to need to be solved at some point, but that they're going to be solved in the context of the resource management working group, rather than us inventing our own solutions to them. So, for example, when I schedule an NSC and I want to do, you know, core pinning — if all I really want to do is core pinning, and I'm not tied to any particular hardware — then that's already there.
If I would like to consume SR-IOV, and therefore I want to be in a particular NUMA zone, there's active debate about the right way to do that, but that's going to be worked out in the context of resource management, as part of how the pod spec works. So that's not something we would manage ourselves directly; that has to do with how you deploy the pod for your NSC.

Yeah, exactly. So basically these two can coexist in parallel. One aspect provides you the resources that you need — that could be another CRD, another controller that deals with what the pod requests in terms of performance, or CPU pinning, or other things — and we provide just kind of a network plug. These are two separate aspects, I think, and it would be easier and cleaner to keep them separate.

Yeah, I think you're right. And that's what I was hoping I would hear, and there seems to be general agreement on it: that we leverage what's being done elsewhere in the K8s community. But I'm just trying to think this through, and how much I need to actually make concrete — because part of this effort is to make a concrete example that will actually forward packets. So I guess worst comes to worst, we'll just hard-code some configuration stuff in the example in the code — you know, for VPP, just set up the details of configuring the VPP in the pod.

Okay. One other thing that I would actually encourage folks to do is to get more involved in the resource management working group.
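For reference, the "plain core pinning, no CRDs" case mentioned above is achievable today through the pod spec alone: a pod in the Guaranteed QoS class with integer CPU requests gets exclusive cores when the kubelet runs the static CPU Manager policy. A minimal sketch — the pod and image names are hypothetical:

```yaml
# Guaranteed QoS: requests == limits, and the CPU value is an integer.
# With the kubelet started with --cpu-manager-policy=static, this
# container is granted exclusive use of 2 cores -- no CRD involved.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-nsc
spec:
  containers:
  - name: nsc
    image: example/nsc:latest   # hypothetical image
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
```

This is why the speaker can say "that's already there": pinning falls out of existing scheduling machinery, while NUMA-aware placement of NICs remains the open part.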
Yeah, there are a lot of folks in the resource management working group who are very, very smart about Kubernetes and cloud native, and there are some folks showing up who are very smart about networking, but you don't have a bunch of people there who really have that sort of depth across both — and we're blessed to have a lot of those people participating in our community. So I think it's helpful for us to cross-pollinate there a little bit, you know, just so that we can actually represent: okay, yeah, we really are working on network things, and we really are trying to do it the cloud native way — and, you know, you'd represent our viewpoint.

Yeah, I think so, and I'd be happy to do that. I think part of what I'm trying to do naturally feeds into that — you could put something in the chat. And I think there's at least one person here who's also involved in the Multus project. I don't know whether they're taking the same approach or not, but in some ways they may be, if anybody wants to comment on that. I know we're fundamentally different from Multus, but some of the bottom-level stuff in terms of the resource definitions is, I think, simply shared. So I think you're absolutely right.

Yeah, I think this is extremely well said. In fact, I was a participant in the resource management working group almost six months back; then I got into other things. But well said. And in fact we are looking at L3 cache partitioning as sort of an advanced resource management construct — we tabled it — you know, a feature that could also be useful for network functions. But yeah, I think this is spot on.

Sorry, quick question: what is the use case driving the LLC partitioning?
So this is essentially about a higher degree of performance and isolation for network functions. Basically, if you have a network function that wants some guarantees — say, as an example, a 5G packet processing function, right, a UPF — and then you have some other network function or other workload which is more best-effort, the idea is that you can create partitions for the cache. So you allot only a section of the cache to the best-effort workload, whereas the guaranteed network function gets the rest of the cache.

Okay, and who is managing this allocation?

So yeah, exactly — that is the part which is still work in progress in the resource management group. Essentially the idea was that you'd be pre-constructing those cache partitions and then letting the kubelet manage them. So it's still in progress.

I'm very much interested in the detail. Would you mind sending me a pointer to this resource management group activity so I can educate myself? Thank you.

Yeah, and I'm interested in that as well, Sergey, on the basis of all of yours and Ed's comments. Would someone do us the favor of adding that to the meeting minutes?

Yeah, yeah, sure.

Do we already have an activity or a work stream within NSM tracking that? Because I think that would be an extremely interesting area.
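The cache partitioning being discussed sits on top of the Linux resctrl interface (Intel RDT / Cache Allocation Technology). Purely as an illustration of the underlying mechanism — driven by hand here, whereas the working group discussion above is about having the kubelet manage it — on an RDT-capable machine it looks roughly like:

```shell
# Illustrative only: requires an RDT-capable CPU and kernel resctrl support.
mount -t resctrl resctrl /sys/fs/resctrl

# Create a partition for best-effort workloads...
mkdir /sys/fs/resctrl/besteffort

# ...and restrict it to a small mask of L3 ways on cache domain 0
# (the schemata line is repeated per cache domain on multi-socket boxes).
echo "L3:0=0x3" > /sys/fs/resctrl/besteffort/schemata

# Move a best-effort process into the partition; tasks outside it keep
# the default (full) allocation, matching the UPF example above.
echo "$BEST_EFFORT_PID" > /sys/fs/resctrl/besteffort/tasks
```

The open question in the working group is precisely who writes these files: pre-constructed partitions with the kubelet assigning tasks, as described above.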
But I would love to have it, because among other things, there are some really good things happening in the resource management working group conversations, but some of them, I'd argue, don't go far enough. For example, the conversations around the NUMA manager: they would work just fine if you're a pod that is going to grab one NIC, but if you're grabbing multiple NICs across multiple NUMA zones, you may have an asymmetrical distribution of the cores you want, and there's literally no story for that.

And what I'm thinking of is circumstances like this — and I don't want to sound like our project is tied to or prejudiced in favor of VPP in any way, but I have to have a concrete example to talk about. If I'm asking for a port on a data plane, and I'm the first one to do it, my NSE may actually have to configure the VPP. But someone else may be able to share the same configuration if they're also looking at the same data plane — and they may not know that they're looking at the same data plane; all they're looking for is a high-level service, or just a Layer 2 connection, right? If it's on the same pod, we can logically infer that it's the same data plane, but it may not be. Just those kinds of things — so that somehow or another, when we register and we know that one of these is available, we coordinate efforts between the NSEs that may be co-located. That's just what I'm thinking of.

This stuff is actually really important; it's also very, very technically detailed. What we may want to do is get together — we would be interested in pulling together a group to go and look at some of this. I know, Romkey, you've put a lot of thought into this, and so has Tom. Do you guys want to start, you know, sort of putting together a work stream
to liaise with the resource management working group?

Well, what exactly do you mean by a work stream? ... Okay, I thought you meant some kind of formality of tracking, with documents.

Only if you guys find that useful.

Okay, all right.

Hey, Tom — on what you said before — have you thought about how to implement a Multus-type interface in NSM?

Well, I'm trying to get away from duplicating that, because Multus is specifically about extending to multiple IP address spaces within Kubernetes, and I want to go down a different path — but look at what Multus needs to do to provide that. And if I look at some of these slides, they talk about, well, some of the same things that we're going to have to do, like these CRDs which don't exist yet. So I'm trying to look at one level below that, because I think ultimately there'll probably be something of a meeting of the minds between these various projects that are trying to enable high-speed networking in Kubernetes. So I'm trying to be aware of what they're doing, but not get bogged down in discussions of multiple address spaces, because we think we have an alternative solution for that. That's just my thinking — I don't know whether that answers your question, John, or not, but that's where my thinking is, for what it's worth.

Yeah, kind of — a little bit more. I'm just trying to think: Multus seems a simple use case — I want to have another IP address in my pod; how do I do that through a different channel?

Effectively, the thing that Multus never talks about is why you want another IP address. There's always a why, and it's never talked about in the context of Multus — and we're all about that why, because we will happily give you another IP address in the pod that actually meets the why you want. But nobody wants another IP address in their pod. They want to
be connected to some set of network services, right? And that's the place where we tend to focus our attention.

Right. But underneath the covers, they're going to have to deploy a CRD that may look a lot like ours — that's a bit different — and that's where we might come together.

They might, right. They might.

So anyway — back to the agenda. Somebody put on here "review pull request 291"; let's go ahead and take a look at that.

Yeah, it was me. That's basically the API refactoring. So folks who develop NSCs, or plan to develop NSCs — it's a great opportunity for you to chime in and see if the proposed API changes actually give you what you need to develop your NSC.

All right, Sergey, does this go hand in hand with the API document that was merged? I don't remember the pull request at the moment.

Yeah, exactly. It basically implements the doc with slight modifications — I hit some issues, and they required very minor changes in the doc, which are already in the updated revision of that doc in the same PR, so you can see what was changed. It's very minor. In general, it just implements what we discussed and agreed on in the nsm_api.md file.

Yep. So I had a chat with Kyle on IRC yesterday about NSCs, and I think he and I agree, but sometimes it's hard to — an NSC is really just a process, which could run as a standalone pod, or it could run in the DaemonSet data plane, or even in the workload pod. So I say that, and, you know, I kind of agree on that; I'm not sure Ed agrees or sees it that way.

So I tend to think about — and this is probably a decent segue toward some of the conversation about the arch docs — I made an attempt to try to get us to a logical place here, and this was built off of some of what Sergey did; it's not quite finished yet. But one of the things I realized is we tend to focus quite a lot on the Kubernetes case. The Kubernetes case is hugely
important, and it's actually really fun. But you're eventually going to discover that you need to do things like reach out and talk to external components that are outside of your cluster as network service endpoints. So I talk a little bit about this in the generic, right? When you look at the network service mesh components in the abstract — and then we go down to how we look at them in the cluster — in the abstract, you just have some network service client that has an L2 or L3 connection to a network service endpoint. And you talk about what a network service is: it's the abstract representation of something that you want. It could include all kinds of things, like isolated resource access, protection from threats, guaranteed bandwidth, load balancing, proxying. And it's very focused on the payload — you're actually focusing on the payload, not the interconnect, when you talk about the network service. And then the network service client is anything that wants to connect to that — an example would be a pod which wants to connect to a network service. And then, talking about all these things in the abstract — and this gets to your point about the network service endpoint — in the abstract, a network service endpoint is just something that provides a network service, to which you can get an L2/L3 connection from whatever the client is, whether it's a pod or something else. Does that help sort some of this out for you, John?

Yes, I mean, it starts to make sense. I'm just trying to think of implementation here, just doodling, because there's been a lot of discussion about how to manage network service endpoints. But if you make them part of the pod — part of the NSMD DaemonSet — managing becomes easier. It adds some more complexity, but I just want to make sure that if I did something along those lines, I was not violating any architecture principles.
Well, that's actually part of why I wanted to write down the abstract description. Because we have been focusing very hard on a particular instantiation of these things, and I think it's a good one, and I think it's good to have focus, because we're trying to get to a place where we can show things that are working to people and get more people involved. But there are different ways, as you may realize, that are going to have different pluses and minuses, and you just have to figure out whether there are use cases that make sense given that. So, for example, you give the example of running a network service endpoint in the network service client pod. There are some advantages to that if you have one network service endpoint per pod, because you lifecycle them together — that's convenient. If you have a network service endpoint that serves many, many network service clients — many, many pods — then that, of course, is a terrible solution. It really depends on the problem you're trying to solve. But one of the nice things is that the framework is sufficiently flexible that you can explore these possible solutions, and some of them will make sense and some of them won't — and most of them will make sense within a particular problem domain you're trying to solve. Makes sense?

Yep, makes sense. But that's what I was trying to think about — I didn't see it called out explicitly, so I was poking to see.

I always appreciate you poking, my friend.

If you go through the nsc.go code, which is kind of a sample NSC provided with the repo — that gives you an idea of the interaction between the NSC and NSM, because either way, your NSC needs to talk to NSM so the rest of the cluster knows that you exist and what service you provide. And that's pretty much it. So as long as there is that piece — NSM plus your NSC
existing somewhere and advertising, then the rest of the communication and the model is fairly flexible to achieve that.

Yes, all that said, thanks; I was trying to poke at it.

One comment I will make for people who decide they're going to read through this PR: the stuff in the list of components is actually pretty well done so far. When you start getting into the parts like the NSM-to-NSM API, that's still somewhat a work in progress. It's firming up a bit, but it's dealing with things like, oh wait, we have to negotiate tunnel parameters back and forth between NSMs.

Yeah, like someone mentioned SLA, for example.

I don't know yet what should be added for that, but we need to think about it. An SLA is a tricky one because, speaking as a network guy, and I know the other network people can back me on this, when you talk about SLA across the entire network, that's a very hard problem. A lot of people, when they talk about bandwidth reservations and guarantees, just mean you're guaranteed a slice of the NIC that goes out of the node, and that's not so hard. But if you want to know you've got guaranteed bandwidth between point A and point B, that gets to be trickier.

Exactly; it's actually an NP-hard problem.

Right, and smarter people than me made a good run at it in the late '90s.

Yeah, I know. One quick question here. I'm wondering about somebody looking at it slightly differently, in the sense of: hey, I've already predetermined my overlay communication mechanism, I don't want all this negotiation, I've just decided to settle on VXLAN, and there's already an implementation around it. Do we have a way of plugging into it?

Yeah. So basically, if you have a network service data plane... think of it this way: the network service manager actually doesn't
do the data plane bits, right? It will be talking to one or more network service mesh data planes that are available to it, and those data planes will have capabilities. So if you have a network service mesh data plane that can only do VXLAN, say the NSM on the leftmost node here basically only does VXLAN, then when that network service manager goes to request a connection, the only mechanism it's going to advertise as usable is VXLAN. And the network service manager it's talking to is either going to be able to do VXLAN with it or not, because you can't ask people to do things they don't do. So you can bring whatever data plane actually makes sense for you; it's just going to do whatever capabilities it has, and those are going to be the capabilities you can use.

Okay, got it. So basically what you're saying is, an existing plugin can just come and advertise what it supports, and then it's all good.

That's it, right? If you want to use the XYZ data plane and the XYZ data plane only does VXLAN, then okay, great; if that's the only data plane the network service manager has access to, then that's the only thing you can actually ask for. Having spent a lot of time with service providers, who want a wide variety of things, I expect that the ability to do more than one data plane, or more than one underlay or overlay, is going to be valuable. I don't know about your experiences, but I have service providers who want MPLS over GRE, MPLS over UDP, SRv6, VXLAN-GPE; I even have some who want MPLS over Ethernet, God help us all.

Right, yeah, I'm completely with you. All I was thinking was sort of both
the ways: sometimes essentially just use one implementation in some part of the fabric, and maybe there is some portion, maybe another tenant, using a different one. So basically a mix and match of both: some NSMs with exactly the same capability, others negotiating.

There could be deployment possibilities like that, and a fair amount of the effort I put into trying to specify the NSM-to-NSM API is the negotiating of the remote connection mechanism, because that negotiation is independent of what happens under the covers on these nodes, which I think is going to be quite a bit more flexible. You and I both know there are going to be physical boxes that can speak only one or two things.

Yeah. All right, cool. Anything else on any of this stuff before we bounce back?

Ed, if I may ask one quick question: where is the registration for the data plane implementation? I missed it.

We're getting there; it will be in the architecture doc. Okay.

And this is, thank you, Sergey, it was you: Sergey has been very insistent that we have to start writing some of this stuff down, and I think he's a hundred percent correct. So I'm trying to write some of this stuff down, and it's a little bit crazy.

Sure. How comfortable would you be with me sharing this with people at ONS if they keep asking about architecture specifics?

Go for it, Frederick, that's awesome. Here's what I would say about that, and I'd be curious about other people's opinions: the stuff all the way down through the section about the network service components and Kubernetes, I feel pretty solid about; that's stuff we've talked about a lot. When you get into the network service mesh APIs, the NSM-to-NSM stuff, it's quite a
bit more solid than it was, but it's not quite fully solid yet. Just as an example, one of the things that isn't in here yet is the negotiation of addressing and routes.

Okay. Can I make a recommendation? Can we commit the parts that we are comfortable with, and then continue the parts we're not comfortable with in a separate PR?

As long as you give me a brief moment to go and correct the one incredibly stupid thing that Sergey already told me to correct. No rush. And then I want to put in a very clear demarcation somewhere.

Yeah, and I'll go through it as well, to make sure that I think it's clear before I share it with anyone. I just want to be able to point people towards something if they want to get really into the weeds: start with this, then come talk to me and we'll talk about how to get you more heavily involved, because I may run into a couple of people like that.

Well, one of the things I'm hoping comes out of this is that once we get a little more solid architecture, it gets a lot clearer where people can grab hold and do stuff. You know, Sergey has been amazing in his willingness to wade in and write code based on slides. It's been amazing, but it's asking an awful lot.

Cool. Can we promote Sergey to, um, slide-to-code compiler?

I'm not sure how Sergey feels about that.

Yeah, I have a full-time job, and it's not this one.

We appreciate all that you do, Sergey. Cool. Anything else on this topic before we go back to the agenda?

Yeah, just: where's the location of this doc? I know you're talking about committing it, but if we wanted to glance at it?

It's here: PR 290.

Okay. I think, actually, it's in the meeting minutes. There is an NSM API .md doc in docs, and I believe this extends that, too.
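The capability-advertisement behavior discussed above (each NSM only offers the connection mechanisms its data planes actually support, and two NSMs can only use a mechanism they have in common) can be sketched as follows. The mechanism names, preference ordering, and function names are all illustrative assumptions, not the actual NSM-to-NSM API.

```python
# Hypothetical sketch: an NSM advertises the union of its registered data
# planes' mechanisms; the remote mechanism chosen for a connection must lie
# in the intersection of both sides' advertised sets.

# Illustrative preference order, most preferred first (an assumption).
PREFERENCE = ["SRV6", "VXLAN_GPE", "VXLAN", "MPLS_OVER_GRE", "MPLS_OVER_UDP"]


def advertised_mechanisms(dataplanes):
    """Union of the mechanisms supported by an NSM's registered data planes."""
    mechs = set()
    for dp in dataplanes:
        mechs.update(dp["mechanisms"])
    return mechs


def select_remote_mechanism(local_dataplanes, remote_dataplanes):
    """Pick the most-preferred mechanism both NSMs can actually use.

    Returns None when the intersection is empty, i.e. the two nodes have
    no overlay in common and the connection request cannot be satisfied.
    """
    common = (advertised_mechanisms(local_dataplanes)
              & advertised_mechanisms(remote_dataplanes))
    for mech in PREFERENCE:
        if mech in common:
            return mech
    return None


# A data plane that only speaks VXLAN can only ever yield VXLAN connections:
left = [{"name": "xyz-dataplane", "mechanisms": {"VXLAN"}}]
right = [{"name": "vpp-dataplane", "mechanisms": {"VXLAN", "SRV6"}}]
print(select_remote_mechanism(left, right))  # VXLAN
```

This also shows why supporting more than one data plane per node is valuable: every extra mechanism a node can offer widens the intersection with its peers, so "you can't ask people to do things they don't do" bites less often.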
So you can go back to the original and see what the pull request is trying to modify; this patch is building on that. And I think it's in the meeting minutes, so you can pull the link from there. Cool, anything else before we go back to the agenda?

Cool. Apologies, Ronki, I skipped over your interaction-with-network-policies item; let's get back to that.

We have seven minutes left, and we have Frederick. I don't think we will be able to cover the last two points in seven minutes, so can we quickly revisit the next-week ONS stuff with Frederick?

I'm happy to talk about it, if that's what you want.

Yes. You mean what's going on with ONS? Specifically, meeting points: several of us will be there in person, right? We can meet and discuss there at ONS, I guess.

Yeah. I won't be at ONS, but Frederick will and Kyle will, so you're in great hands.

Okay. For those of us who are attending, like me, I schedule in advance, as the calendar is filling up quickly. Is there any chance to work out the time and space for NSM interactions? I'm very interested in talking about everything that I care about: cloud-native networking, the current state of the work, general discussions, testing, benchmarking, comparisons, and so on. Would that be possible next week?

What I'm thinking is, maybe we'll just create an email thread for the folks who are going to be there in person and then sort out the times. How about that?

Perfect, works.

We also have an events page on the networkservicemesh.io website, so we're going to be setting that up.

We just looked at that, but it wasn't updated, sorry, Frederick.

Yeah, we'll update that. So if you don't have an answer now, keep watching that spot.

Okay, thank you.

Well, we're definitely talking about meeting up on Wednesday night somewhere, but I want to look at some of the venues
first, to make sure that they will suit our needs, and also to make sure that they're open and not entertaining a private party. So we'll post somewhere: on Wednesday we'll have, same as we did at Open Source Summit, an NSM happy hour. We've also requested an unconference session on Thursday. We don't have a time for that just yet; it still has to be accepted, and they work out the schedule during the conference itself, as I understand it. But ideally we should get enough people wanting to join that we get the unconference session on Thursday. So we'll also post that onto the website on the events page once we know the time.

That would be awesome. And the only request I would like to place, if that is at all possible: for those of us who suffer from jet lag and try to do our best at a healthy lifestyle, my brain stops working after eight p.m.

Yeah, no worries. On Wednesday, by my schedule, I have three talks, but all Thursday, outside of the unconference session, feel free to get hold of me, and Kyle will be there as well, so we're more than happy to help.

Awesome. I only have one talk on Thursday, so Thursday may work. Thank you.

Cool. Yeah, for details, that's all I have at the moment of things that are set. So, we're running up against the end of time. I do want to ask the following question: since ONS is happening next week, do we want to have a network service mesh special meeting next Friday? I don't know where everyone is in terms of their travel plans and everything else. I'm available, and I'm perfectly willing to have one; the question is, where is everybody else?

Well, from my perspective, I would really like to have the conversation with Michael and the team about the testing and benchmarking that we didn't manage to cover today, and ONS is finishing on Thursday; I will be
available on Friday.

You're not going either, Michael? So you're on a normal schedule?

Yes, I am.

Awesome.

I should be able to get to it, provided I can find a hotspot.

Awesome, so let's go ahead and keep it then.

Frederick, it's a networking summit; they're expected to have top-notch Wi-Fi.

That's what you would think, but part of our challenge is to get internet through challenging conditions, so it's also a test.

Isn't it a problem of too many chefs trying to work on the same dish?

Yep. And as soon as they quit paying the bill, they shut all that stuff off, so it's going to be solved.

All right, so we're kind of done for the week. We want to make sure we get back to the two agenda items next week: one on the K8s network policy, and the other on the CNF testing. And I'm sure we will have more agenda items for next week as well. Cool, excellent. Thanks very much, everyone. Thank you, thank you, thank you.