I grabbed at least the very basic agenda and kind of plopped it in there. So I'll let you take it from here. Cool. Thank you very much. So getting started, let's do some agenda bashing. Does anyone have anything to add to the agenda that is not currently on there? Well, I sort of took the liberty to add the basic use case. So that's there. Feel free to cross it out if there is no time. I think we'll have time. The agenda seems a little shorter than last week's, so I think we should be able to get to it. Also, I think Prem should be coming back soon as well. And once Prem comes back, he'd be a really great person for you to get to know, since he's doing a lot of the use case documents as well. So just for your information. I know I've been away. I got distracted for a few months, but I've been here before. So I've looked at Prem's doc already a few times, and I have lots of questions, but I'm a simple guy, so I'm starting with a basic one. Thanks. Cool. Okay. So starting off, for those of you who are going to be in Vancouver on August 28th for the Open Source Summit: the Open Source Summit runs from Wednesday to Friday, but on the Monday and Tuesday before, they're running workshops. And on the Tuesday, there is a cloud-native network functions seminar, and we highly encourage anyone who's attending the Open Source Summit, if you're able to get to the network functions seminar, to sign up. The way that you sign up is, when you are registering for the summit, it'll ask you what additional workshops you'd like to attend. And I believe there's no cost for attending the cloud-native network functions seminar. There is a cost for some of the ones on Monday, but the Tuesday cloud-native network one should not have any cost to it. And that's in the afternoon. Who's running it? And can I ask if there are more details posted anywhere?
There is information if you click on the schedule link that's in the meeting notes; you'll get some of the information on it there. I believe it's run by two people. One of them is Arpit, who runs the Linux Foundation networking group, so basically their umbrella organization that has like seven projects, and he's the person at the top. And also another guy named Dan Kohn, who's the executive director for the Cloud Native Computing Foundation. So very good people to get to know, and definitely people who have a lot of sway in the Kubernetes, cloud-native, and networking communities. So sorry, I've got another question, because I quickly read what's there. Is anybody from this crowd attending? Yeah, I'll be going. Yeah, we'll definitely be attending. Just to sort of put a fine point on it, that workshop is not going to be a presenting kind of space. It's going to be a seminar, which means that it'll be very conversational with a lot of back and forth with the audience. So in some sense, to be in the audience is to be part of what is going on there. Okay, that's actually what I meant. So, you know, being active, proactive, loud, occupying airspace, whatever. If you are in there, I'm sure NSM will be. But because the topic is clearly very interesting, and the projects are too, do we expect, apart from this crowd here, a lot of folks with, you know, the CNF dear to their hearts, talking about the actual real problems and the ways they are approaching this problem? I would expect so, but I want to make sure. In other words, do we believe that it is a worthwhile place to be in? I suspect that it'll be a mixed group. We're not the only group who's participating in it, and both Arpit and Dan are driving it. So for a start, I'd be very surprised if there was not a large presence from the Kubernetes network SIG group.
And under that umbrella, you also have Cross-Cloud CI, ONAP, FD.io, OpenDaylight. So I mean, some pretty huge projects that are represented under their fold that deal specifically with networking. And so I'd expect there to be a pretty diverse group there. Yeah, I think it's going to be a very worthwhile experience, frankly. It's actually, quite frankly, a big part of the reason I'm going to OSS this year at all. Okay, okay. Let me check my schedule. Thank you. That's good feedback. Thank you. And here I thought you were going for my talk. I did not. I forgot the fact that you were giving a talk, because this has been filed under Go for this summer. No offense. No worries. So, okay, so there was an action item as well that was assigned to you about asking Tim a question about sending an invite to SIG networking. So I managed to do a different action item that's later in the agenda. I have not actually sent that ask to Tim currently. I should get that done. But I did ask about sort of where we should go with formal structures within Kubernetes, and that's a little further down in the agenda. Okay, let's leave that one on me for next time. Yeah, that's further down in the agenda, so we'll definitely get to that. Okay, so next question is, do we want to cancel next week's meeting, since it is the 4th of July holiday in the United States? For reference, the 4th of July is on the Wednesday; our meeting is on Friday, American time. So does anyone have any opinions towards whether we should cancel or not? Well, I put this on the agenda. This is just something I discussed the previous week. I just thought I could help there. I personally won't be around next Friday, so I won't be attending. Not that the call shouldn't happen, but I was just curious: if a fair number of people weren't around, then you might want to consider it as well.
I think most things are canceling next week, just because of the bizarre placement of the holiday in the US. Yeah, and I also want to be a bit careful not to make people feel pressured to show up to the meeting just because we're holding one. Yes, exactly. So I'll be working next Friday, for what it's worth. So if there's a meeting, I'll attend; if there isn't, well, then I won't. Yeah, me too. But let's give the Americans a bit of slack. They don't have that many holidays. I'm one of those, but nevertheless, I will be available next Friday. Tom, I think you're proving his point. There you go. Thank you, Carl. So what's a holiday? Yeah, that's right, Fred. Check the Oxford English Dictionary. Do you remember those days where everything gets very quiet and you get a lot done? Ask the Webster guys to fill in the gap in the dictionary. Okay, so you're talking about those days at previous jobs when my boss got sick and didn't call meetings. Okay, yeah, I got it. Yeah, I think I personally would be more comfortable cancelling it, just so that people who are going to take long weekends don't feel forced to come. But what do you think, Ed? Should we cancel it or should we leave it on? Yeah, I'd be really inclined to cancel it, for a variety of reasons. Okay, well, let's call it cancelled. The only problem that we're going to have with cancelling it is that the person who can cancel it is on vacation right now. We could put a really big note at the top of the meeting minutes. Yeah, that's what we should do. Yeah, we'll just put a big note with the date and say the meeting has been cancelled. Yeah, I'll reach out, because Prem is the one who owns that calendar event, and so I'll send a message to him. But yeah, that's a good idea. Let's stick it in the agenda as well, just in case. And we've got 15 people who have heard the news firsthand, so that's pretty good.
Yeah, so I mean, if you want to show up, feel free to, and you can have a conversation, but yeah, there won't be an official agenda. Okay, so this was pretty exciting. There's been some discussion about becoming a Kubernetes working group, and Ed has all the details, so I'll yield to him. Yeah, so there's a standing item where we sort of ask the SIG guys, hey, what do you think is the appropriate formal thing for us to become in the Kubernetes ecosystem? Should we be a SIG Network sub-project? Should we be a working group? And so we took this question to the SIG Network working meeting yesterday, and Tim was pretty forceful that he felt that we should be a Kubernetes working group, which I'm completely fine with. And he talked me through a little bit of the sort of where do we go, what do we do, how do we match with how all that stuff is written kind of thing. And so on my to-do list is to sort of get a PR going that we could basically submit, in order to get that wheel rolling. I'll probably end up reaching out to folks on the Network Service Mesh mailing list just to give you guys a pointer to it, so you can comment and we can sort of get it converged a bit. But that's actually very good news. So we must be like an avatar that's becoming a formal working group. Yeah, one of the questions that Anna and I had spoken about really early on was what is the best way to engage the community? And being a part of a group like Kubernetes or CNCF and so on, we can see that it only helps to drive people towards looking at the project and contributing. And also, the more people we get, the more use cases we get, and the better we understand the problems and work out where our holes are. And so there's more structure that would get put on us.
We'd have to fit in with their release schedules and pass information up about what we're doing on a regular basis. They'll probably relax that a bit for us because of how new the project is, but we'd give them a roadmap, and they ask for up to a year out. But where will we be in a year? That's a pretty open question at this point. And so... Well, one thing to keep in mind with that, by the way, is there's a bunch of stuff listed there that they say they're looking for, in terms of questions we should ask ourselves. But when you look at the actual approved working group proposals, you'll see, for example, that they are much simpler than at least what I initially imagined from reading the docs on how to become a working group. One thing I do want to be careful about is us making our lives unnecessarily difficult. So I'm drawing a lot of inspiration from other successful working group proposals. So yeah, a little bit of overhead, but overall, I think it should be a good move. So are there any questions or concerns or comments? Nothing other than thanks, Ed, for chasing that down. And I think it's going to be pretty exciting for the project. I'm hoping so. It does look like a good idea. All right, so a question for those of you who are involved. We need to start adding our images to Docker Hub and to start getting them automatically built and published. So would anyone like to volunteer to be that person? Yeah, I added this comment because I was facing some issues. This is Prateek, by the way. So I was working on the sidecar thing, and for that to get automated, it needs to pull the image from somewhere. So right now I'm just using some fake image which is already there in Docker Hub. I saw there was a PR merged for the init container, so it'll be good to have all those images so that we can directly pull. But you know, to use the init image, you don't have to have it on Docker Hub, because it gets built during the CI.
So it's stored in the local Docker storage; you just need to refer to it. That's it. I mean, it's not as convenient, but it's a workable solution. Is there a reason we are publishing the init container to our Docker Hub, like we're publishing other things? I don't think we're publishing anything to Docker Hub yet. Maybe I could be wrong about that. Yeah, we're not putting any images there at this time. So should I wait until we get the images up there first, and then... Yeah, I think deploying to Docker Hub should be, or there are a couple of other repositories as well, but I think picking one and deploying to it should be a good thing. And if I recall, I believe there's a Ligato username that we could have the option of posting to. Yeah, I do have a repository for us under Ligato for network service mesh. And you're right, there doesn't appear to be anything there yet. But I do actually have that. You know, this is why my brain was saying I thought we were doing that: it was because I went and got the place to put things, but apparently didn't do the work to actually put things there. Yeah, so with Docker Hub, you can enable a trigger that will read a Dockerfile and will build it. And so that's probably the mechanism that we should use. Cool. So at this point, I see we have three container images. One is for NSM, another one is for the init container, and the third one is for the webhook mutation server. So there are three images. That's awesome. Okay, so I'm going to stick that on. And actually, Ed, you're probably the only one present that has access to it. That's certainly within the realm of possibility. Yes, that certainly could be true. It depends on how much presence of mind I had, and when, in terms of adding other folks. So, all right, well, my username is the same as my IRC name. So if you want some assistance with that, I can do that as well.
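As an editorial aside on the publishing discussion above: a minimal sketch of what a manual build-and-push step for the three images might look like. The org name (ligato) comes from the discussion; the image names, tag scheme, and Dockerfile paths are assumptions, not the project's actual layout, and the real docker commands are left commented out.

```shell
#!/bin/sh
# Hypothetical publish loop; names and paths are illustrative only.
ORG="${ORG:-ligato}"
TAG="${TAG:-latest}"
for img in nsm init-container webhook-mutation-server; do
    echo "would build and push ${ORG}/${img}:${TAG}"
    # docker build -t "${ORG}/${img}:${TAG}" -f "docker/${img}/Dockerfile" .
    # docker push "${ORG}/${img}:${TAG}"
done
```

In practice the Docker Hub automated-build trigger mentioned above would replace this script entirely: the Hub watches the repo and rebuilds from the Dockerfile on push.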
I mean, independent of assistance or not, I think we should probably get a few more names on it. So. Cool. Okay, so there's a request for adding a Makefile for creating images and binaries. So I just wanted to get opinions on whether it is fine to add that, because there are multiple command directories we have where we can build those binaries and images. Yeah, well, before I give my preference, does anyone have any comments about Makefiles? I vaguely recall this from a few months ago when we started this. I believe we decided not to have a Makefile, though I cannot recall why. Yeah, I think part of the way it started was that when we started out, a Makefile was just ridiculous amounts of overhead for a very simple thing, right? And then we transitioned to using Dockerfiles to build things, and it made a lot more sense to have the build be Docker-oriented, where you would have a Dockerfile. And so I effectively moved a lot of what you would do in make into Docker. And so I think part of it is the question of, like, how much complexity do we think we need? What would we put in a Makefile that we don't currently put in the Dockerfiles? And would a very simple Makefile make things somewhat easier for some folks? Yeah, we do have a set of script files, so you run like scripts/build and so on; they can be run to initiate it all. One advantage of a Makefile, though, is that it's probably more discoverable. Like, pretty much every major tool, with the exception of IDEA-based IDEs, knows how to properly use a Makefile. Yeah, my tendency is that I think I'd be completely comfortable with a very simple convenience Makefile, something that went and ran the Docker builds or ran the scripts directly. But what make is really good at, which is managing dependencies, Go has largely obviated the need for, because it's really good at freaking building things.
I would tend to want to keep it as a simple convenience Makefile. Yeah, so let's go and add one. And it'll just be a Makefile that calls the scripts, that calls the build script. And then let's leave it at that and not add anything else to it. That way people can do like M-x compile in Emacs or :make in Vim or so on, and just have the tool run it automatically as well. I think that's a good idea, Fred. And also, it will sort of flush out issues with things being missing in the scripts as well. Having a top-down start makes it more accessible. Yeah, and at this point, adding anything more complex than that, I think, should be rejected. So just literally the minimum number of lines we can get away with, a two- or three-line Makefile. Yeah, that was my initial idea behind this: to use a Makefile to call all the Docker commands so that we don't have to play around with multiple scripts, and we can have simple make commands. And once we create the documentation around how to get users started, that will be an easier way to get in, rather than going to different directories and running those scripts. So yeah, I agree the Makefile should only be calling the Docker-based scripts and nothing else, so it will not be handling the dependencies. That was my initial thought process. Okay. So go ahead, and would you be comfortable taking that on as an action item? Yeah, sure. I can take it. And who was it that was taking it on? How do you spell that? Okay, great. Okay, so action items for review. Let's see, let's go ahead and take off that top one. So we've been working on inventing a character for Ed. And I believe we have a candidate, but before we commit to such a candidate, I want to know if anyone else wanted to create a character for Ed. This may turn into a puppet show, so pretty exciting. You know me well enough, Frederick, not to tempt me. You really do. So when I say that, I don't say that in jest. Again, I know better than to tell.
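Going back to the Makefile decision above, a sketch of what was agreed: a convenience-only Makefile whose targets just delegate to the existing Docker-based scripts, with no dependency tracking of its own. The script names below are assumptions; they would need to match whatever actually lives in the repo's scripts directory.

```makefile
# Convenience-only Makefile: every target delegates to the existing
# Docker-based build scripts. No dependency management lives here on
# purpose; Go's toolchain and the Dockerfiles handle that.
# Script names are assumptions -- adjust to the repo's actual scripts.
.PHONY: build docker-build

build:
	./scripts/build.sh

docker-build:
	./scripts/docker-build.sh
```

With this in place, `M-x compile` in Emacs or `:make` in Vim picks up the default target automatically, which was the discoverability point raised in the discussion.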
So anyways, does anyone else want to participate in that? What are the rules of engagement? Come up with some type of a... let me give a good example. So we're looking for some type of a mascot. Let's think a little bit: when you're thinking of mascots, if you look at the Go mascot or you look at the Linux mascot, they are things or animals that can do things, that are actionable. Like the Linux one: the artist, who's from Texas A&M and who submitted it, said that he wanted it to look like a very happy penguin who had just had a large meal of his favorite fish. And if you look at it, it can do things and it represents the community. So we want something that represents our community and can do things and is active, but also, you know, friendly. So that's basically... Bearing a resemblance to it. I don't know how this became about creating a character for Ed. This is a mascot for Network Service Mesh. It began as a character for Ed, and then it turned into a mascot, I think. The reason why is because he said he wanted to have a character for a book, or for a... Oh, first for like a story, like the story of the bee for Kubernetes. Yeah. And then it grew from there. So this action item needs to be renamed to create a mascot. Okay, well, how about we give it one more week, and then after that, if nothing comes up, then we'll go with a mascot that Ed and I can select. I'll rephrase that. That's an excellent AI for the 4th of July holiday. I think so too. So that means two weeks from now. Is that too long, Ed? I don't think that's too long. You know, we can sort of see how it shakes out. Okay. So next action item. So, Tom, you were going to look at some documentation. I remember you asked some questions as well. So how is that going? Yeah, it's coming along.
I don't have it done yet. I had about half the week to work on it at the most. And I'll generate a pull request when I get my files together. I've just got text files all over the place at the moment, and I want to make sure I can reproduce everything in my VM, you know, in a controlled environment. And I'll reach out to you and others if I have some questions. So far, there's a lot of very basic startup stuff that seems to be missing in the existing documentation. That's what I started with. But as soon as I've got something concrete, I will submit a pull request. Okay. Fantastic. And just a heads-up on a future action item, which is not going to be on Tom; it's going to be for someone who has not gone through the network service mesh project in detail to go through the documents, give the documents a spin, and give feedback as to... Well, I'm doing that. I have some changes to the documents too. As I found mistakes... not mistakes, but things that have changed since the document was written, inconsistencies with file names and things like that, I fixed them in my local versions as I went along. So I thought that might be part of the effort, but yeah, absolutely, everybody should look at them, and then we'll merge all the changes. So, okay. So, action item for Taylor to document CNCF CNF. I'll let you speak, Taylor. Hey, can y'all hear me? Yes. Okay. I started working on, I guess, the document. What I need to do is move it over into the wiki, create a wiki page; I think that was the next part. I pulled together some notes on how the Cross-Cloud CI portion could help with some of the testing on Kubernetes clusters, and then the actual CNF project, which is the comparison project. Probably two pages for this. A question, if I could interject: are we going to maintain some wiki pages? I was sort of under the assumption that everything was going to be markdown.
Maybe I'm confusing that with discussions in other groups, but it's just a question. I mean, that's what I was going to assume with my stuff: I was going to put it in an additional markdown file, as well as some changes in the existing markdown files. I don't know. So, there is a wiki and it's made of markdown files, so you're good. Fantastic. Nobody likes to edit wikis. But everyone at least reads them. So, on the GitHub wiki, is that where you all would like me to put the CNF items? Yeah, that'd be a fantastic place to put them. You should have access to edit them. So, if you have any trouble, let me know. Sounds good. All right. So, Prateek, I'll let you speak. Hi. So, I created the wiki document documenting all the components we need to build for adding the sidecar admission into the pod creation. And I also created a PR page on that, which is a work in progress. I still have to add some more code around that. So feel free to take a look. I'm not sure if the wiki allows adding comments. We should have a way where we can submit a document and people can comment on it. Yeah, that's a good point. We'll have to take a look to see how GitHub handles those issues. For the wiki document I created, adding sidecar containers, I don't see an option for me to add any notes. It gives me an option to edit, but not to comment. Maybe what I could do is add this document to a Google doc and share it around to get some comments. Yeah, that's a good idea, and then we can transcribe it. What about creating a doc folder in the repo, and basically doing a PR with whatever document you have into that doc folder? Then we can actually comment, and when it's finalized, it can get merged into the docs. Yeah, that's a good idea as well. My hope is that the GitHub wiki would have some of the GitHub standard features in it, because at the end of the day, the GitHub wiki is just a Git repo. The fact that it seems to be missing these features is highly unfortunate.
Yeah, I'll take a look around; if it doesn't let me add comments, then I'll create a doc in a docs folder. I'll go along with whatever you want to do. Yeah, well, one of the downsides of adding it into the docs is that the hurdle to add to it is a bit higher, but that might be okay if it enables us to have reviews. So let's think about that for a little bit. We'll add an action item to return to that, to work out where we should place these documents in the long run. The wiki is a branch on the repo, so it may allow pull requests. I thought it was an entirely new, sorry, an entirely separate GitHub repo, because the URL is different: to download the wiki you do git clone github.com/org/project.wiki.git. So it has a different URL. That's new to me. It used to just be a branch on the repo, and you'd go to that branch, and any changes you pushed up in that branch would show up for the wiki. If it's a completely different repo, then that may even be easier to do pull requests on. A docs folder is totally fine, though, if we just want to do markdown files; it doesn't have to be on the wiki. I like the docs folder and markdown files personally, but I think at one point Frederick had said that we could convert the wiki eventually to markdown files in a doc repo. Yeah, well, isn't markdown a good starting point though? I don't know. The content is not really the problem itself; in both scenarios it'll be the same content. It's just a matter of where the documentation should live. Should it be part of the main repo itself? And part of the idea was that there's some information where it doesn't really matter what version you're using. Oh yeah. The information is relevant, but there's nothing wrong with saying, if you want to see that information, look at the latest release branch or master branch. That's a perfectly reasonable thing to do as well.
Yeah, it's a good point about versions. So okay, well, I'll add an action item on me to look up what remedies we have for this, and we'll definitely come back to that. Yeah, in other projects I have seen people follow this approach: they add it to the docs, and then once they finalize it and people have resolved all the comments, then they move things over to the wiki. Yeah, Kubernetes does the same; they have a community repo where all the proposals are stored, and then there's a discussion happening over there. So it's pretty common. Okay. There are lots of good ways to solve this, frankly, so let's just sort of pick the simplest one and go. Yeah, and I want to make sure that we get to the use case that was listed below as well, so I'm going to go ahead and move on. So I'll take a look at that and then we'll revisit. So we had an action item as well for people to look at getting involved with the pod-to-NSM API. I won't discuss that in detail at this particular point, other than to say that we have something that's been merged, and what is there is not set in concrete. So based on the use cases, we're happy to change things to accommodate. But it's there for people to take a look at. Okay, for issues that have occurred in the past or that have been closed in the last week: we've actually had a relatively busy, not from the issues side, but from the pull request side, we've had a pretty busy period of time. So we've added a new CRD handler interface. We also added an object store, so that the CRD objects have a place to actually stay and live. And there have been improvements to our logging. So we're moving towards logrus at this point, and I believe the plan is to have it produce JSON log files that can be ingested through Fluentd into a stack of your choice. Sorry, I have a question about logrus; sorry to interrupt you.
I mean, maybe it's just a lack of knowledge on my part, but one of the reasons why I never use logrus is because I never managed to get into the message the line number from the source code. Like when you debug, it's less convenient than with glog, where you get a message with the name of the source file and the line number where that message was generated. That can absolutely be done with logrus, and in fact it can be done in a way that's more convenient, because you can essentially have a JSON attribute, like log source or whatever you choose to call it, that contains that value. So when you're dealing with something that is, as Frederick was saying, a JSON-aware log digester of various sorts, you can pull that out more easily. So you can definitely do that; I've done it myself, and we definitely want to be doing that. If you could provide an example, that would be awesome, because it's a bit painful without this. Yeah, what I'd be wanting to do is just push something that makes it trivially easy to do, right? What you really want is for it to be such that every time someone is logging, that information is being logged without them having to actually do something to cause it to happen. Yep. So also, there's been work on getting plugins to become idempotent, so that when you call init or close multiple times (this is for plugins that depend on other plugins), things don't blow up on you. Making plugins idempotent is an important step in making plugin dependency management easy to handle. We added the init container, which we discussed earlier, as well. We are also adding config map parsing code. So basically, a config map is configuration that's stored within Kubernetes; that information is pushed into the container, and we're parsing it.
And on the agenda for next week as well, it depends: we had Kubernetes 1.11 recently released, so we're also getting things set up for that migration. When client-go cuts a branch, then we're also going to be moving the project to 1.11, so just a heads up. And I want to make sure we get to the use case, so, I'll apologize if I get your name wrong, Maciej. Yes, try again. Okay, I'm joking, Magic is fine. It's not an anglophone name and it's also not a francophone name, it's a Polish name, so I'll spell it phonetically next time. Cool, well, you have the floor. Okay, I only have like four slides, and they're posted at the link. So I don't know, can I share a screen so you can see my mouse? Sure. Okay, all right, let's see if it works. Okay, you should be able to see the online version of the slides. Okay, so I think I've been on one of the calls here a while back, and I got distracted, and I'm back, and hopefully I won't get distracted again. But I looked at the slides that I think you put together, or whoever, specifically the distributed CNFs, the distributed bridge case. And as I'm a bit allergic to L2, I thought, why don't I look at IP? I also looked at the use cases document, but I realized that I'm a bit behind, so I'm going to play catch-up. And if these following four slides basically mean that I'm barking up the wrong tree, just feel free to shut me down at any time. I have reused the slides referred to, and I just, you know, replaced the L2 semantics with IPv6 and IPv4, and I called it virtual routing and forwarding, VRF. Not a distributed router, not a virtual router, but, you know, I used that name. So if you are familiar with the slides that I referred to earlier, the distributed bridge, then your brains must now also be very familiar with the iconography used on those slides, so thank you, Ed.
And the problem is very simple, you know, similar to the distributed bridge. It's just that the pods don't want to connect over an L2 bridge network, so distributed bridge networks or emulated LANs; they'd like to get connected over emulated VRFs. And I'm not calling it a VPN, because the routing control plane, the network control plane routing, is not part of this use case; it's actually orthogonal to it. It's really about connectivity of the pods to the IP forwarding instances. Whether they are public or private doesn't really matter; they are clearly, you know, logically isolated. And the way this distributed thing is implemented is also, you know, using some sort of IP tunneling. Like in the case of the distributed bridge, VXLAN was referred to, and the other tunneling technologies here are, you know, VXLAN-GPE, which is a new draft that is going through the IETF, basically adding a protocol field to the VXLAN header, referred to as VXLAN-GPE; or GRE, or some other, you know, IP-over-IP or IP-over-L2 or MPLS encapsulation. Makes sense so far? Okay, now here is where I may have got things wrong. Sorry, yep, was that a question? No, it makes sense to me. Good. Okay, so looking at the, you know, definitions: the network service, its name is vrf0, the selector app is vrf0, and the pod instance that is actually providing the service on a specific node is basically labeled with app vrf0, and it carries the name of vrf0-pod. In terms of the channels, I have put here the name of the channel as vrf-ipv6, as in this specific case the distributed VRF is serving IPv6 payloads; but, you know, IPv4 is another option, or if dual stack is supported, then IPv4 plus IPv6. And in terms of the NSM wiring, again thanks, Ed, it was a bit of a replacement here: the only difference from the distributed bridge is that the pod that is providing the service here is basically providing a vrf0 service, not a bridge
zero service everything else stays the same and in terms of the a distributed implementation we can have one can have those VRF0 pods living on notes and then they you know connect by magic and one you know if from the data plane perspective it could be VXlan GBE tunnels between each other in terms of you know the address addressing IP addressing management and IP address provision and into the the the pods that are requesting the service as well as the routing part is is orthogonal and out of scope for NSM cool so so will this work should work I mean it should work just fine obviously it's up to whoever is deploying the VRF0 pods to figure out how they want to get routes and things like that you know just like it's up to whoever would deploy a VR0 pod how they want to get you know what they want to do about things like ARP and broadcasts and bridge tables right if you were just fine okay very good it sounds like it's building a sort of a namespace support which is for the pods so you can have a multiple namespace kind of it's providing a distributed VRF service so namespace from the you know the IP namespace perspective sort of distributed yes so one of the things I struggled with in some of these is who supplies IP address do we use the do we do we reuse IP address from Kubernetes in the pod or do we create a new IPAM or code names so I've actually been thinking about it this way right and this may not be the only way to handle it but there's a ton of use cases where this is the thing that makes the most sense right and the way I've been thinking about it is this if what a pod goes to connect to a network service endpoint right which may also be a pod um the for example a pod reconnected the VRF0 pod um then it should get its IP address and possibly some routing information should come back from the VRF0 pod as part of setting up that connection because network service mesh doesn't have any idea what the right IP number is or what prefixes should be sent to VRF0 
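A toy sketch of that pattern, where the endpoint rather than NSM supplies the address and routes at connection setup. All type and method names, the ULA prefix, and the sequential allocation scheme are hypothetical illustrations, not the real NSM API:

```python
# Hypothetical sketch of the pattern discussed: the network service endpoint
# (here a vrf0 pod), not NSM itself, supplies the client's address and routes
# when a connection is requested. Type names, the ULA prefix, and the
# sequential allocation are illustrative assumptions, not the real NSM API.
from dataclasses import dataclass, field

@dataclass
class ConnectionRequest:
    network_service: str      # e.g. "vrf0"
    channel: str              # e.g. "vrf-ipv6"

@dataclass
class ConnectionResponse:
    accepted: bool
    client_address: str = ""              # assigned by the endpoint, not NSM
    routes: list = field(default_factory=list)

class Vrf0Endpoint:
    """Toy endpoint that owns IPAM and routing knowledge for its VRF."""

    def __init__(self, prefix="fd00:0:0:1::/64"):
        self.prefix = prefix
        self._next_host = 2               # ::1 reserved for the endpoint

    def handle(self, req: ConnectionRequest) -> ConnectionResponse:
        if req.network_service != "vrf0":
            return ConnectionResponse(accepted=False)
        base = self.prefix.split("/")[0]  # "fd00:0:0:1::"
        addr = f"{base}{self._next_host:x}/64"
        self._next_host += 1
        # The endpoint also knows which prefixes are reachable inside vrf0.
        return ConnectionResponse(accepted=True, client_address=addr,
                                  routes=[self.prefix])

endpoint = Vrf0Endpoint()
resp = endpoint.handle(ConnectionRequest("vrf0", "vrf-ipv6"))
print(resp.client_address)  # fd00:0:0:1::2/64
```

The point of the sketch is only the direction of information flow: the request carries no address, and the response does.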
But the vrf0 pod has a very good notion of that, right? It has a very good idea of how vrf0 is handling IPAM, and a very good idea of what the prefixes in vrf0 would be.

Yeah, I fully agree. In fact, for the specific case with IPv6, the vrf0 pod would basically emit an RA (Router Advertisement) to the connected pods and then use RA-based (SLAAC) address allocation to allocate addresses. So, if that's OK with the group here, I'm very happy to dive a bit deeper. I'll try to avoid DHCP, for now at least, and look at to what degree we can use existing, well-known, standard IPv6 mechanics to handle this specific problem.

So we're saying that this would be completely orthogonal to the IP address space in Kubernetes, unless there is something going on in the vrf0 pod that makes it non-orthogonal?

Yes.

Yeah, that's where I sometimes struggle: how does the interaction with Kubernetes happen? We're creating an overlay of pods, which is good, and then, similar to the L2 distributed bridge, we are creating an overlay L3 network here.

Yeah, it's almost like a distributed VRF, a distributed VPN. It has nothing to do with the Kubernetes network per se; it's almost like private connectivity over some IP.

I see the case. I'm just trying to think of how Kubernetes would have any interaction, and whether it should have any interaction. At some point traffic, I would assume, wants to go from Kubernetes to this distributed overlay, or not?

Very good. I think that's actually the case we would need to address if external connectivity is required. So I would say there's the simple case, the easy case in my mind; there may be other cases, I don't think of everything. But the simple case, and I think what you're getting at, is: where do we get the IP that's used for the tunnel here? Where do we get the tunnel endpoint IP?

Oh, the tunnel address space, I think.
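Stepping back to the RA-based allocation mentioned a moment ago: this is standard IPv6 SLAAC behavior (RFC 4862, with modified EUI-64 interface IDs per RFC 4291), not NSM code, and the prefix and MAC below are made-up examples. The vrf0 pod advertises a /64 prefix, and each connected pod derives its own address from that prefix plus its MAC:

```python
# Standard SLAAC mechanics (not NSM code): combine an RA-advertised /64
# prefix with a modified EUI-64 interface ID derived from the pod's MAC
# (RFC 4291 section 2.5.1). Prefix and MAC below are made-up examples.
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                     # flip the universal/local bit
    # Insert ff:fe in the middle of the MAC to form the 64-bit interface ID.
    iid_bytes = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    net = ipaddress.ip_network(prefix)
    return net[int.from_bytes(iid_bytes, "big")]

addr = eui64_address("fd00:0:0:1::/64", "02:42:ac:11:00:02")
print(addr)  # fd00::1:42:acff:fe11:2
```

This is why the endpoint only needs to advertise a prefix: each client can form a unique address on its own, with no per-client state on the vrf0 pod.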
Yeah, but for the tunnel address space, are you talking about the address space for the outer header or the inner header?

Outer header.

So, for the tunnel address space for the outer header, there are two things running through my mind. One is that you could essentially get it from the normal Kubernetes networking space; that's one possibility. The other possibility occurred to me...

So then the pod takes two addresses from Kubernetes? Or do we use a private IP address?

Not necessarily. It's going to vary somewhat depending on the data plane, because keep in mind the thing that has to terminate the tunnel is going to be the data plane; it's just over an interface. So, to a certain extent, it gets delegated to how the data plane is dealing with it. In the mechanical sense I would expect, for example, that if you are setting up Kubernetes networking and you have a data plane, you might want to set aside some addresses for tunneling. That's one possibility. The other thing I think is interesting, because it's semantically meaningful, is that you could imagine a situation where you need to be tunneling via a network service. For example, imagine I have a radio network, with physical links connected to it, and the network service I'm trying to reach is actually only reachable via the radio-network network service. That's a little more complicated scenario that I haven't thought all the way through, but I am aware of it. Does that make sense?

The way I think of this, and maybe I'm wrong here, is that this VRF, as was just explained, is really agnostic to what the tunneling mechanism is. The way I thought of it is that the tunneling mechanism, or the tunneling underlay, is actually another negotiation with the network service manager, another provided function that will set up that tunneling network, and that in turn will know about the IPs.

Yeah, there are a bunch of different options there, and I think they will vary somewhat depending on the data plane you're dealing with, because different data planes will potentially want to handle it differently.

So from that I gather, sorry, that this problem has not been addressed yet for the distributed bridge either, correct?

Correct, the outer-header IP for tunnels has not been explicitly addressed yet.

OK, thank you.

All right, just a heads-up: we're over the scheduled meeting time. We can still have discussions afterwards, but I need to close up the meeting. So again, a reminder: no meeting next week; the next meeting will be on July 13th. Thank you everyone for attending.

Thanks, bye everybody.

I have a few more minutes; I don't know if anybody else wants to keep discussing this, because I would like to explore this VRF case a bit further, and whether the pattern also applies to the distributed bridge, for the outer part.

I think Ed may have disappeared.

Yeah, we can do it over email, no problem.

I'm still here. Did what I said make any sense?

Yes, it does, but the question comes down to IP address allocation for the outer header, for the underlay, whatever the tunnel is, and that for sure needs to be coordinated with Kubernetes, because it's a Kubernetes cluster that this thing is running in, and I'm not sure that problem has already been addressed.

I'm not sure that's the case, because this is a network service mesh. The way we started talking about this stuff months ago, and maybe I'm off base, is that Kubernetes, at least in one way of thinking about it, would be responsible for an IP address space, what we might think of as analogous to the management plane in more conventional networks, and the network service mesh would be responsible for orchestrating the elements of the data plane, which would perhaps be based on higher-speed data-plane and routing elements that are outside of the Kubernetes network. This would connect the pods, so the pods would be able to talk to each other; some of the gRPCs, if remote, might go over the Kubernetes network (I know here they're saying Unix sockets), but the actual data-plane traffic would not be part of the Kubernetes network. That's one way of thinking of it, and I realize that in the networking working group there are a lot of people working on multiple address spaces, Multus, and various other things that look at the world a different way, but I think those were our first principles.

I think there are multiple cases here, though. There's the ships-in-the-night overlay case, where there are two address spaces. There's your case, where the management network is Kubernetes and then there's our VXLAN overlay that's completely isolated, and for some of these cases that may work. There's the case where you need to bridge from the network service mesh into Kubernetes at some node, so at that point there would have to be a bridge node or something, I would assume. And then there's the case where you actually want to have an overlay network but use the Kubernetes address space. I think a lot of this is going to depend on what features each of those things wants from Kubernetes, and on what visibility and control Kubernetes has into the infrastructure.

Yeah, from the network side, the only strong requirement I can think of at this particular point is this: Kubernetes, when you spin up a cluster, has two IP ranges. One of them is the pod IP range, and the second one is the service IP range. As long as there's no collision with those two ranges, if you're using an IP network in your network service mesh (it ultimately depends on the SDN you're using), it should be seen as an independent construct from the Kubernetes-based systems. Yes, the implementation could create tunnels through the Kubernetes IP network; that is definitely a possible implementation, but it's definitely not a requirement, and it's not the typical model I think of when I'm talking about network service meshes.

A quick comment on your service IP range and pod IP range: the pod IP range never actually makes it to the wire outside, correct, whereas the service IP range is actually reachable from outside and lives on a physical wire outside of the physical compute node, correct?

That's the typical implementation, but it's not actually mandated by Kubernetes; Kubernetes just has three basic rules.

It could be flat.

Yeah, and inaccessible. Exactly; the service range could be internal addresses, and you'd have to do an extra step to expose them, either marrying them with a proxy or some NAT device or something like that.

Would it be useful to consider, if we become a working group, asking for a network service mesh address space? It's probably a long way in the future, but...

Yeah, so tell me if I'm completely wrong here: whether it is mandated or not, if it is a de facto best practice, or a de facto standard, that the pod IP range never makes it onto the wire, and the service IP range in most cases is on the wire, whether it is a publicly reachable wire or not, it is living on the physical wire, then
the outer address space, sorry, the address space used for the underlay in this VRF case must actually come from the service IP range. And if so, then John's point about whether NSM needs a separate NSM service IP range or not is the next question; but the first question is what I just said.

So the way that it's set up is this: when you want to set up a new connection, the new connection request goes over a Unix socket to the network service mesh, so it doesn't ever connect to a traditional Kubernetes service. In that sense, the network service meshes themselves may end up communicating over a service to another network service mesh in order to negotiate a tunnel, and you're right, that's where you worry about the IP addresses for that tunnel; that's the negotiation between the NSMs. When the NSMs are communicating, they're using standard Kubernetes primitives: they're basically talking over the pod network to the other NSM, and both of those NSMs then have to negotiate capabilities with each other and establish the tunnel on both sides. If you're working with an IP tunnel, as an example, from one pod to another pod, then those IP addresses do become important, but primarily just so that you don't accidentally create a collision: if the cluster is using the standard 10.0.0.0/16 range, you don't want to create an IP address within that 10.0.0.0/16 range. That being said, even if you were to spin up a pod with no networking and then drop in connections, you could stick something in a 10.0.0.0/16 range, so there are ways to make that happen. But in general, the idea is primarily to keep them as separate as possible, so we may need to add in some type of configuration just to say "avoid these ranges" or "prefer these ranges"; that would probably be within one of the plugin configs when it's being set. I don't think we need to ask for a default range at this point; we can probably just pick a default range to start with, because it'd be an implementation detail of the plugin itself, and a different plugin that implements a different IP range could swap it out. The more important part is that, if you're doing IP tunneling and it's an important feature, we have a way to configure that, and one of the things we're working on is making the plugins configurable, so you can add in whatever the plugin needs to know.

And is this the scenario that's already a work in progress: for the vrf0 pod on node one to be able to establish a tunnel to the vrf0 pod on node two, they will have to communicate via NSM?

Yes. That involves IP addresses; that involves actually setting up the tunnel. The part I mentioned is that we're still building up the initial, I guess you would say, infrastructure platform for such a thing to be built on, so we're not actively working on that specific plugin at this point, but one of the primitives it will definitely need is a way to set a configuration for that plugin. There's also another thing: the NSM itself doesn't really care what IP addresses you use, so one thing we've spoken about is letting the connections themselves negotiate an address. So when you request a tunnel, or when you request a service, if it requires an IP address, it's possible that the service providing that functionality has the most context and could provide an IP for downstream use. That's one model we're also looking at.

So, in that sense, one possible pattern would be to have the service be given a set of IP ranges, and then, when the clients connect in, the service could hand out IP addresses as it sees fit?

Exactly. So if the service's job is to provide a tunnel, then that service has to know what IP addresses are available, and it negotiates with another service. That's the way I think of it: it's all like a network of services, each of which supplies a different layer. So this guy, right, this guy should actually know? Is that what you're saying?

No, I think this guy doesn't know. I think the network service mesh that he's talking to says, "oh, he needs a tunnel," and then it negotiates the details of what that tunnel is.

Right, OK. All right, Frederick, is this described anywhere? You said this is being built, so it's currently in the GitHub code, but are the options documented anywhere that I can read up on?

I'll try to be as accurate as possible. Right now there are no tunnels being built yet, because we're still building up the primitives for plugins. So right now what we're building is the mechanism that will allow you to build a plugin: that includes logging infrastructure, how you add and manage configurations, and object storage for those plugins. We're working out what it even means to be a plugin at this particular point.

OK, so it's early on, then.

Yeah. Once we get through that, one of the questions becomes: how do we build a Layer 3 tunnel?
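As an aside, the negotiation pattern just described, where the service endpoint with the most context is given IP ranges and hands out addresses to connecting clients, might be sketched like this. This is a hypothetical illustration; the collision check against the pod and service CIDRs, and the kubeadm-style default ranges, are assumptions, not NSM behavior:

```python
# Hypothetical sketch of the pattern just described: the service endpoint is
# handed an address range, verifies it stays disjoint from the cluster's pod
# and service CIDRs, and allocates from it as clients connect. The default
# CIDRs are common kubeadm-style ranges, not anything NSM mandates.
import ipaddress

class RangeIpam:
    def __init__(self, cidr, reserved=("10.244.0.0/16",   # pod CIDR (example)
                                       "10.96.0.0/12")):  # service CIDR (example)
        self.net = ipaddress.ip_network(cidr)
        for r in reserved:
            if self.net.overlaps(ipaddress.ip_network(r)):
                raise ValueError(f"{cidr} collides with cluster range {r}")
        self._hosts = self.net.hosts()    # generator over usable addresses

    def allocate(self):
        return str(next(self._hosts))

ipam = RangeIpam("192.0.2.0/24")          # disjoint from the cluster ranges
print(ipam.allocate(), ipam.allocate())   # 192.0.2.1 192.0.2.2

try:
    RangeIpam("10.96.0.0/24")             # inside the service CIDR: rejected
except ValueError as err:
    print(err)
```

The "avoid these ranges" configuration mentioned above would correspond to the `reserved` parameter here: the plugin supplies it, and the allocator refuses ranges that collide with the cluster's own addressing.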
How do we build a Layer 2 tunnel? What should those look like? And one of the things we're doing is trying to build the plugins in such a way that the decision comes from whichever entity has the most context in this space. Interestingly enough, despite the fact that network service mesh is organizing the negotiation of these plugins, that does not necessarily mean that network service mesh itself has the best context. It's possible that the service being exposed has more context about the problem, and it likely will have more context about the problem than network service mesh itself does. Does that make sense?

Yes.

So if that's the case, then that decision should be handed to that service, and that service can work out "OK, I need to hand out this particular IP address," or "I need to set these parameters," and so on: basically any initialization of that connection. It would pass that information back to the network service mesh and say, "yes, I am accepting this connection with these parameter requests"; the other side accepts as well, and so now you have a successful negotiation, and then you build up the tunnels. So that's the current model we're looking at: allow the service to provide that information, and the service could say, "use this IP address."

And so where would you program this IP address, or the range of IP addresses it could use?

It would be at the plugin, in that area. And another interesting thing is that all of this stuff is technically point-to-point, so, depending on the tunnel, if the tunnel itself is transient and doesn't ever need to be seen, it's possible that, even if multiple systems end up reusing the same IP address across tunnels, there will be no adverse effect, depending on how they get handed out and how the negotiation works. Just an interesting side effect. But in general, the entity that has the most information should be the one that hands it out.

So this is a good area for documentation, because we are all unsure of what you mean.

You mean from the concept perspective? Yes, and this is something we've been discussing: patterns, and trying to work out which direction we should head. This is not the only pattern, but it's a pattern we think would be good to go towards. It is also possible that someone could implement a plugin where network service mesh actually does hold that information, and instead of having the service handle the configuration, the network service mesh plugin itself could deal with it. So it is possible.

Look, we should lay out the possibilities, and I think Prem could help try to match them to use cases, because there are a lot of possibilities here. But it's fairly critical we get this right, because without it nothing works.

Completely agree. And one of the things about this system is that it's flexible enough that, while we want to provide good patterns and good templates for people to follow, if you want to break out of the box it doesn't stop you from breaking out of the box; at the same time, it should make the common use cases very simple and very easy to follow. That's the way we're looking at it. One of the things we want to be careful with is that, if you actually do have a need for something that's not a common pattern, we don't want to say no; we want to be able to say "you can try."

All right, I think that clarifies a lot. So let me think this through, and if I have some thoughts on that, from the specific use case perspective and the network mechanics, I'll provide them next meeting. Thank you.

Cool. And if any concerns pop up, let us know; one easy way to get hold of us with relative ease is on IRC, if you hop onto the network service mesh channel.

Sure, sure, I'm there. But I'm going to approach it not from the orchestration perspective but from the networking perspective: I would like to leverage all the hooks we have in the networking specs, and the functionality that should be there as part of the IP stack, in the context of this VRF, v4 and v6, to see to what degree this could be eased. Anyway, let me think about it. Thanks.

Cool, thanks, and definitely looking forward to hearing your feedback.

All right, thanks very much, and enjoy the July break. Thank you very much, goodbye everybody.

Bye, take care.