Hello, just a reminder that this call is recorded. We'll get started in a few minutes. Does anyone see my shared browser with the agenda?

Yep, we're able to see it.

Thanks. I've also posted the link to the agenda and the meeting minutes in the chat here, so please add your names there and we'll be starting in a minute or two. I don't think we're going to have Ed in today, so we'll have to make do without him. Cool, let's go ahead and get started.

Welcome to the Network Service Mesh weekly meeting. We have this meeting every week at this time, every Tuesday. We also participate in the CNCF Telecom User Group meeting, which occurs every first Monday at 8 a.m. Pacific and every third Monday at 4 a.m. Pacific, and we're going to participate in the CNCF Networking Working Group when that resumes. We have a few major events coming up.

Sorry, Fred, just a quick amendment here. I did participate in the very first Zoom call of the CNCF Networking Working Group last week. It was just a couple of people; nothing was really decided, but I believe that Matt sent an email over the mailing list, so whoever is subscribed there can see it. It's essentially a call for participation and ideas.

Perfect, thanks for letting me know. I'll go fish the details out later on and stick them in here. We also have the Open Source Summit, which is happening in a few days, running from the 28th through the 30th. We have an intro-to-NSM talk by Ivana and Radoslav, and I will also be at the CNCF booth, I think on Monday. I have to check the schedule, but I'll be there at the CNCF booth. So if someone is around and wants to chat, I'll be at both of Ivana and Radoslav's talks and also at the booth. We also have "Building the Next Telco"; I believe that's the session with Ivana and Radoslav. In the following month, November 18th through 21st, we have KubeCon and CloudNativeCon, and co-located with KubeCon and CloudNativeCon we have NSMCon for people to join.
There is now a waitlist for joining NSMCon, so if you're interested in going, please sign up now and hopefully we'll be able to slot people in, but as of now the conference is officially full. We also have an NSM developer talk — excuse me, I'm still getting over a bit of a cold, so I apologize. We have a developer talk for KubeCon coming up: "Five Cool Things You Can Do with NSM", with Ed Warnicke, Nikolai, and me. So feel free to join us and learn about five cool things you can do.

We have FOSDEM coming up. FOSDEM is located in... actually, where is that located?

Brussels.

Yeah, it's in Brussels. And there has been some interest in having NSM in the SDN room.

We are considering, on our side, this invitation, which was really nice; we just have to figure out the logistics and submit a proper application.

Yeah, and FOSDEM, if I recall, is run by the community, so there's a very wide range of interesting talks there at various levels. It's a very interesting conference, so I definitely recommend going if you've never been to FOSDEM before.

We also have KubeCon and CloudNativeCon Europe, which has been announced; they are now starting to accept proposals for Amsterdam, and that will be between March 30th and April 2nd. Just as a reminder, the proposals close on December 4th, so make sure you get your proposals in ASAP. This is a bit earlier than previous years, and that's, I believe, because the conference is earlier in the year. We also have the Open Networking & Edge Summit, which is going to occur in Los Angeles; the event URL will be determined soon, so we'll hear more about that later.

Do we have Lucine on the call?

Taylor said that they have a conflict; I guess that's why we don't see her.

Okay, not a problem. So we have the social media team update; you can see the update they gave us for the 22nd. Cool, there it is.
We have eight additional followers now, which brings us to 489. We're following 31 more people, bringing us to 1,980. We've posted 637 tweets overall, 30 of them this week. We posted that NSMCon is sold out. The Telecom TV video has been shared; that has snippets from a variety of people, including me and Taylor. We also reshared the CNCF NSM webinar and promoted the session at OSS EU, and 12 individual session spotlights for NSMCon went out. The plan is to continue to promote KubeCon, tweet when the Twitter account reaches 500 followers, and tweet when the GitHub repo reaches 300 stars. There's also the Contributors podcast we recorded just last week; once that has been posted, we will share it with everyone. What's interesting about it is that it's focused not necessarily on the projects, but on the contributors and how we grow a community and so on. So a bit of an interesting podcast. We've also been given access to the LinkedIn account. There aren't very many updates there yet, other than that we've posted some things, and the engagement rate is just over 10%. So we'll see how those numbers change over time. We've also posted links to the webinar — or rather, there was also information posted about the webinar and so on.

Cool. With that, let's go ahead and move down. Actually, right now we don't have anything listed on the agenda. Is there anything that anyone wants to discuss? I have one thing on my mind, but I also know that Henry from Ericsson is here, and we discussed the other day that he might want to come with something like a use case. So Henry, do you have something that you can share with us?

Yeah, sure. Actually, I included Roshini, who has more of the details and can present that use case. I guess the idea with this was — I mean, we have been looking into NSM.
And of course, this is all in the context of the Telecom User Group. But then we discussed it, and the idea was that we bring it first into this community, just to make sure that the use case looks sane from that point of view, and then we will actually move it into the testbed within the Telecom User Group as well. So this was just a pre-check with the NSM community here to make sure that we are continuing in the right direction.

Okay, do you have something to share? Do you want me to stop sharing so that you can grab the screen?

Yeah. Roshini, can you share and present the use case?

Okay. Can you hear me now?

Yeah, we hear you.

Okay, good. I can share. Can you see my screen now?

Yeah.

So, we were working on a high-level use case which basically covers end-to-end connectivity. This is the scope: we want to show end-to-end external connectivity and VPN separation by dedicating a specific interface to the application pod. The red box here is the application pod. We are not showing any Kubernetes default networking here; instead, we are mainly focusing only on NSM in this picture. The main components that we assume to have are: a distributed bridge domain network service that will establish point-to-point communication to the application pods using a bridge, and a layer 3 forwarder network service endpoint that can connect the VIP address to the internal load balancer — for example, VPP in the network service — which will load-balance the incoming traffic to the respective L2 bridge. The bridge domain network service we have tried out, and with the help of Nikolai we have made some changes and submitted a pull request, so that's available.
But then we are also looking at something like a distributed bridge domain network service that will be scattered over the different worker nodes. And we are also interested in a layer 3 forwarder network service that will expose the VIP. This shows a gateway router that will be configured with ECMP, so whenever traffic with a specific VIP reaches the gateway router, the gateway router will ECMP the traffic to one of the nodes where this L3 forwarder is running. The CNF testbed could use any software router to simulate this gateway router functionality. And our wish list, or the expected final state, is to have no-NAT, which is not currently supported, but is expected...

No-NAT is supported with L3.

Yeah, I mean, we need to have multiple addresses. I can show the other slide.

Okay, yeah.

So this is the basic use case that we would like to see in the CNF testbed. Should we show the next slide, or do you have any questions or comments? Do you think that this makes sense as a use case?

From an NSM point of view, this makes sense. As you said, you already started working with the examples and got this layer 2 bridge CNF implemented. So yes, we can definitely start working with you if you need any help and support on bringing this along. I would suggest trying to have something in the examples repo that can then easily be migrated to the CNF testbed. Whether this would make sense for the CNF testbed will have to be discussed and synced with the people from the CNF testbed, and unfortunately today they have a conflict — Taylor, Lucine, no one from that team is here. So probably this needs to be brought up with them on Thursday as well. But I think that they would want to have as many use cases as make sense to people, especially since we have some practical background here.
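To make the gateway-router half of this use case concrete, here is a small sketch in Go (NSM's implementation language) of the ECMP route a software gateway would install: one route for the floating VIP, with one next-hop per worker node running the L3 forwarder. The addresses below are placeholders, not values from this discussion.

```go
package main

import (
	"fmt"
	"strings"
)

// ecmpRouteCmd builds the ip(8) command a software gateway router could use
// to spread traffic for a floating VIP across the worker nodes that run the
// L3-forwarder network service endpoint.
func ecmpRouteCmd(vip string, nodeIPs []string) string {
	parts := []string{"ip", "route", "replace", vip + "/32"}
	for _, ip := range nodeIPs {
		// one equal-cost next-hop per node; the kernel hashes flows across them
		parts = append(parts, "nexthop", "via", ip)
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(ecmpRouteCmd("192.0.2.10", []string{"10.0.0.1", "10.0.0.2"}))
}
```

Any router that supports ECMP (or a Linux box, as above) would do for the testbed simulation; the point is only that the VIP resolves to multiple equal-cost paths.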
I mean, it's not just completely made up out of thin air.

Yeah, for us as a community, sometimes it's hard because I'm mostly working on examples, figuring out, you know, a simple ICMP responder. Okay, it's a proof of something, but this makes a lot more sense. So yeah, okay.

But then I think also, we can start to provide the same thing in the format that they expect in the testbed, and of course it can then be further discussed there as well, I guess.

Yes. My experience so far with the CNF testbed — because I prepared the two examples that are out there — is that I implement something that's not touching the hardware at all, but in a way that they can just take it and apply it on their side. Of course, if you're willing to contribute straight to the CNF testbed, that's not a problem. I would prefer if we had something that we can replicate in the examples, but if you think that this is not suitable, or you don't want to do it for whatever reason, I personally don't have any problems with that.

I think the example is good as well. It shows the use of NSM directly, I think.

Okay. Just to add, from a CNF testbed perspective, I think this is a good enough use case. It's definitely something we can look into.

Yeah, in the short term, one of the things that we have spoken about before is the ECMP and the router on top. In the short term, I think it's okay to simplify it and hard-code it. In the long run, it'd be interesting to get NSM to stick something in front of it, so that we can make a request and receive connection parameters to it on demand. But no, to me this looks like a really good use case. This is actually very close to a specific use case that people have been wanting in Kubernetes for a while.
There are not too many implementations of it, where you have ECMP controlling a set of gateway routers with a floating IP that's spanned across multiple routers, so you can have multiple paths. This is actually the first step towards achieving that as well. So I think this is an excellent first step, and thank you for bringing this up.

Then I have a toy service that I'm playing with, and I have some questions about it. If I could share that, I could get some of your views on the direction of how to actually solve this, because it's part of this as well.

Well, we do have time, so let's go ahead and bring it up.

I can just talk about it and share. And how do I share? This one — share screen. So this is a service that I actually have running, but I run into some trouble when I want to automate it. The address, as you see here — this is the LB VPP. Now, does everybody see the picture?

Yeah.

The LB VPP — I can set that in the manifest of the service. So actually, in the service here, I have all the information I need. But what I also need to do, on the client side, is to create the GRE tunnel device with the VIP, and I would like to set the route. I would probably do source routing, so that anything that has the VIP as its source should go back to this domain. And how would I do that? I'm not really sure. Do I make my own init container that does that? And can I tag some extra data in the protocol from the endpoint that I can receive in the container of the client?

So to be clear: you want direct server return?

Yeah, because I don't get that in this thing either, and there's really nothing else that I see that can do no-NAT in Kubernetes. This can.

Yeah. So direct server return is a little bit more of an interesting use case.
Within NSM, there should not be any problems with performing the direct server return itself, because what you want in this scenario, instead of plain IP, is something that carries a tag so that you can know where the responses should go — or another option might be to have the source and destination preserved.

Yeah, it works already; it's just that I need to add the GRE tunnel on the client by hand. Otherwise, in this setup, I used the LB VPP and it sets up the GRE tunnel. I will show them — I cannot point, but yeah. So in the bridge, those are going straight to the client through NSM. If I create the GRE tunnel on the client by hand, and I put in a rule saying that if the source address is the VIP, then use a routing table whose default route goes back through nsm0, then this setup works fine. And I can even test it from another container, since the OpenVPN server is going through the Kubernetes network.

This makes sense. So, yeah, I think we can put in arbitrary... I guess you would say, as part of the changes, there are a couple of ways we can do this. One potential way is that we could build a small network service that sits in front of the application, that reads off those particular labels and tags and then knows how to mangle the packet properly — treat the packet properly — so that when it returns, it builds the packet in the way that you expect and sends it to the correct destination from there. It's very similar to what OpenShift did: OpenShift has a proxy that sits in front of each pod, and when they receive traffic, they are able to treat it there. So it's a similar path.
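The manual client-side setup just described — a GRE tunnel carrying the VIP, plus a source-based rule steering replies from the VIP back through the NSM interface — could be sketched roughly as the following ip(8) commands. Device names, the table number, and all addresses are illustrative placeholders, not taken from the actual deployment; the real invocations would depend on how the LB VPP side terminates the tunnel.

```go
package main

import "fmt"

// clientTunnelCmds returns the commands an init container might run to
// reproduce the hand-made setup: GRE tunnel for the VIP, then policy routing
// so that VIP-sourced replies leave via the NSM interface (nsm0).
func clientTunnelCmds(vip, lbAddr, nsmDev string) []string {
	table := "100" // dedicated routing table for VIP-sourced traffic
	return []string{
		// GRE tunnel toward the load balancer, riding on the NSM interface
		"ip tunnel add gre1 mode gre remote " + lbAddr + " dev " + nsmDev,
		"ip addr add " + vip + "/32 dev gre1",
		"ip link set gre1 up",
		// replies sourced from the VIP consult the dedicated table...
		"ip rule add from " + vip + " table " + table,
		// ...whose default route leads back into the NSM domain
		"ip route add default dev " + nsmDev + " table " + table,
	}
}

func main() {
	for _, c := range clientTunnelCmds("192.0.2.10", "10.0.0.1", "nsm0") {
		fmt.Println(c)
	}
}
```

Note that running these commands requires `NET_ADMIN`-level privileges in the container, which is exactly the concern raised later in the discussion.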
But yeah, this is definitely possible, and I think one option is that we may have to build a network service that can perform the packet treatment on the header in order to mangle it properly. This should be a very easy path.

But when you say adding extra information in this connect request that comes back in the response — you know, where you have the IP context and all this kind of stuff — can I add data to that buffer that gets transferred back to the client?

That's possible as well. When you do the initial connection, you could pass the label information right there; then you don't have to worry about additional labels or encapsulations or anything like that. That's definitely a way to do it as well.

But I'm looking at this: there is a context, something that's in the source code. Do I just add to that one, or is there some other way of getting the extra data into the response?

Yeah, we can add it into that context. When you do the initial connection, and you build it and create the interface, as part of the request and response you can add arbitrary labels. In this scenario, our plan in the long run is to try to standardize some of the labels, so that we don't end up with an explosion of random labels which makes people incompatible. But in the short term, don't worry about that; feel free to create a label and stick it in there.

Cool, I will do that, because I thought they only went in the other direction — that you got labels from the client — but I can add labels on the response as well?

Yeah, that's correct. You can have labels on the response.
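The label mechanism discussed here can be sketched as follows. Note that this is a simplified stand-in: the real NSM connection type lives in the project's generated API packages, and the `example.io/vip` label key below is purely hypothetical, invented for illustration.

```go
package main

import "fmt"

// Connection is a minimal stand-in for the NSM connection message exchanged
// during Request/Response; the only part relevant to this discussion is the
// free-form Labels map carried alongside the IP context.
type Connection struct {
	ID     string
	Labels map[string]string
}

// tagResponse shows the pattern suggested on the call: the endpoint attaches
// an arbitrary label to the connection it returns, and the client reads it
// back once the request completes. Key and value are made-up examples.
func tagResponse(conn *Connection) *Connection {
	if conn.Labels == nil {
		conn.Labels = map[string]string{}
	}
	conn.Labels["example.io/vip"] = "192.0.2.10" // hypothetical key/value
	return conn
}

func main() {
	resp := tagResponse(&Connection{ID: "conn-1"})
	fmt.Println(resp.Labels["example.io/vip"])
}
```

As noted on the call, such keys should eventually be standardized rather than ad hoc, but for experimentation any label works.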
And if you have trouble with it, or if it doesn't work for some reason, let us know and we'll fix it.

So then I'll make my own init container, and I will not use the admission webhook, because then I can create interfaces in that one.

Exactly. You just create your own client and go from there. There are two other things about this, though. One of them is that if you're using a kernel interface as your client, the kernel interface may or may not have access to change this — you need privileges in order to do that. That's why I was suggesting that you could have a small network service endpoint there, as part of the chain, that is able to do that packet treatment on its behalf without the privileges. But if you're using something like memif, then you can set it yourself; you have the ability to craft your own packets. With kernel interfaces, you may need one extra component in there in order to avoid giving that pod privileges.

I see what you're saying, but could I run an init container as privileged without that poisoning the complete pod?

You cannot. It's all or nothing: if you're running a container as privileged, it's all privileged. That's why I was suggesting that we can inject something in the middle that is able to perform that packet treatment, and that would be the recommended way at this point.

Yeah, I will start by trying a privileged-mode client, just to see if it works, because it would be cool if it works. And then we can think about the extra hop with the memif and the kernel again.

You can start it with privilege as well and then work your way down. So if you want to start off privileged, that's okay.
But just make sure there's a path to unprivileged in the long run, because you don't want a bunch of containers out there with elevated privileges.

Yeah, of course, but this is more or less a toy — something to play with. But it's cool because it shows we can run no-NAT load balancing. I read about the load balancing in VPP, and it's kind of cool: since it's a hashing load balancer, we can run them in parallel without session sharing and synchronization and stuff like that.

Yeah, exactly. So let's give it a try. Start off privileged, and then once that's working, we can help you bring down the privileges, depending on the implementation.

Yeah, I think that would be good, because the picture Roshini showed is probably where we will go. But I think we need to solve these problems before we can do anything with that.

Perfect. So yeah, if you need any help, please join us on Slack. You can also come here and ask questions, and we'll try to help with these types of things. My strong recommendation is to ask questions on Slack, because there we're able to spend more time answering the questions properly, with the details.

Excellent. Do you have any other questions?

I don't.

So yesterday, Thomas actually mentioned something about 5G. Was he referring to this use case, or is there something else going on?

No, no, I think it was one of these use cases: to still have this proper VPN separation on some of the protocols that will go into 5G. So it was from that point of view.

Cool. One quick question: for that second use case, is your intention to eventually put that on the CNF testbed as well? You mean the one with the OpenVPN tunnel?

Yeah. That I can put up as an example.
Because it's cool: you can run it without having all the networking set up. I mean, you can just run it with normal Kubernetes, tunneling through Kubernetes. So I don't know if it's suitable for the CNF testbed, but as an example, maybe we can do it.

Yeah, as an example it's fantastic as well. We would be more than happy to have that as an example. Let us know if you need any assistance — any changes in APIs, infrastructure, whatever. I mean, it's all about the users in the end. We can do whatever we want, but what's most important is people like you. So thank you for doing this.

Okay. So I have something that I want to bring up, and maybe we'll finish it next week: the forthcoming release that we are planning. I wish that Ed was here; I know that he's going to listen to this later, so hello, Ed. If there's anything else that people want to bring up — it would have been good if we had Ed here, but does someone want to try? Maybe not. Anyone want to bring something up?

Okay, then I will go quickly through what I have here. We did our last release a couple of months ago or so. I hope this time we will do better; I think that we are in a much better situation, or at least a little bit better. Also, we have pretty good CI already, which allows us to actually get a good enough release. So it's going to be 0.2. I was hoping to label this a beta release; I don't know if people will agree with that. Do we think that the project is already at that stage? I think it's kind of mature. Maybe we have some rough edges here and there, but I would consider it a beta. Somehow, alpha sounds a lot scarier, and if people are coming up with use cases already, then probably we deserve to label it a beta. I don't know — Andre, Fred, someone else — how do you feel about that?

Sorry, it cut out for me in that last part. Can you repeat the question one more time?
Yeah, I was asking: do people feel that we can call this version a beta?

Yeah, I'm happy with calling it a beta. I mean, we have a lot of really amazing features that have been packed in. They're not hardened yet, but we're getting closer to hitting the set of features that we want to deliver in the first major version. We have things like SPIFFE and SPIRE, we have the open policy stuff that's been going on, and all the necessary pieces in order to do things properly. The inter-domain stuff is part of it too. So we have a lot of really amazing things, and we're starting to settle on a set of features. We're talking about how we harden features much more than about how we add new features, as we were even just a few months ago. So I think beta is a good name for it. Andre?

Okay, then we probably will have to figure out the name. Last time it was a constellation code name. What was the name?

Andromeda.

Andromeda, yeah. Okay, we have to figure out which constellation it will be. Maybe we'll just create a poll for this; we have to figure out a couple of names. That's not that important.

There is a constellation called Corona Borealis, and what's interesting about it is that Corona Borealis is the crown of Ariadne.

Oh, okay. So that's my recommendation. I'm happy with others too.

It's a bit of a mouthful. Yeah, the crown. Yeah, sounds good.

Then I actually have a specific proposal for when we want to branch-freeze, so that we have a couple of weeks before KubeCon. I was hoping for not the next Monday, but the Monday after — like the 4th of November. That's a bit aggressive, maybe. We can figure out in the following two weeks what we... or maybe the fifth — I would say the fifth, because that's when we have the weekly group call. Yeah, we have two weeks.
So if we can figure out what the standing problems are that we have to stabilize and fix, maybe we can do it. Of course, nothing is fixed; we just have to figure out what we want to do. So how do folks feel about this? Is it a bit aggressive?

Yeah, I'm happy with that particular path. I think we should ask Ed how he feels, because he's been doing a lot of the refactoring. We should ask him how he feels about November 5th as a branch-freeze date.

Yeah. We also have two options. We can just say, from now on — from today, from this second — we stop being adventurous in merging things that we're not really sure about, and just start merging stabilizations. We know that there's the JWT work and something about security to come. Then we just continue in master, trying to stabilize, stabilize, stabilize, and then, like a week before KubeCon, we branch the release, because we will know that everything is stable. If we just do stabilizations for the next month, I guess we would be good to go. The other option is to just branch, try to stabilize there, and continue playing in master, but that's something that we need to figure out internally. So does someone have any opinions here, any suggestions? Nikolai?

At least, I think we should try to switch from the local and remote to the unified APIs. At the moment we have adapters, so we're trying, with small steps, to do a replacement of the old code with the new one. So I think these kinds of changes should be approved in any case.

Okay. They will be small enough to review easily?

Yeah, yeah, of course. That makes total sense. Okay, so who is working on that? You?

Yeah, I'm working on it.

Okay. Then you can just put labels on the PRs with 0.2 — we have a label already.

Okay. Just put them on and make sure to prioritize this.

Great. And then, before the previous release, I took the time to rename the API to alpha.
Now, if we decide that it's going to be a beta, maybe we can just change our API to beta. Maybe we'll stay for a little bit longer on this beta version of the API, but I don't think that we have anything scheduled that changes so much that we'd call it as unstable as an alpha. So these are more or less my thoughts around this release.

Cool. So I think that we probably need to have a conversation with Ed as well and ask for his recommendation. My recommendation at this point would be to tentatively set November 5th as the date for the branch, and once we rally around with Ed, we can finalize it — just to give people some goals to work with at the moment. How do you feel about that, Nikolai?

I'm sorry, I missed the last part.

I'll rephrase: should we set November 5th as the tentative date for the moment? Once we get a sense of Ed's comfort level, since he was working on the refactor, to see if he thinks it'll be all wrapped up by then, we can solidify that — just to give people a goal to work toward. And then we can promise not to move it forward, but not promise not to move it back.

Yeah, makes sense.

Cool. So let's say November 5th is the tentative branch date, and we'll finalize it. Okay. Cool. Is there anything else that we want to discuss? Andre, on your side, something?

No, no. Last week, I think, we merged the SDK local and remote separation, so it's chained in the network service manager internally now. Mostly that's all from my side. We also updated the VPP agent to 2.2 or 2.3 and updated Kubernetes to the latest. This means that you have to use the latest Helm — the 2.15 release — in order for it to work. Other than that...

That's good to know. Just as a recommendation: if we need a specific version of Helm, we should make sure that we encode that somewhere.
Although it's a bit hard, because we tell people to do helm install, and I don't think we have a way to enforce that.

Yeah. I mean, you cannot install, like, old Helm on newer Kubernetes in any case. You cannot do helm init; it just doesn't work on 1.16.

One last question as well: does anyone have access to the AWS infrastructure?

Yeah, I think I do. Let's take this offline.

Cool. With that, is there anything else anyone wants to bring up, or shall we close it out? Okay, so we'll go ahead and close it up. First, I wanted to thank the Ericsson people for bringing their use cases and discussing them here. That was really helpful. Thank you.

Thank you.

Yeah. And I'll circle back with Taylor and let him know what we covered, so he's properly informed — and excited. Taylor is the person who is the lead for the CNF testbed. And with that, we will have next week's meeting at the same time, same place. Thank you, everyone, for attending, and we'll see you all again next week. Have a good day.

Thank you. Thank you. Bye. Bye.