Good morning. Hi, Ed. Hello, everyone. There was some confusion last week: I was on an airplane, Frederick was ill, and there were public holidays, so it was a confluence of events. Sorry about that. I was traveling to an internal event last week and got very, very distracted, but let's get back to moving things forward again.

Cool. Does someone want to... I guess I can share. Hi, welcome. Morning, Nikolai. Good morning. I'm just having my first cup of coffee, so I'm still a little bit out of it. I'm a bit jealous, because it's almost night for us, and having coffee at night is not very good. No, I usually don't have coffee after two p.m. my time; it just doesn't go well.

All right, let me see if I can share. Hi, Frederick. Morning, Frederick. I was just commenting that I'm having my first cup of coffee, but it's even earlier for you. I was going to ask if you could take the meeting today, because I had a two a.m. emergency that required me to drive, so I was looking at dropping out for today. We'll definitely figure something out.

Cool. So, going through the board, starting with progress. We've got: the SRv6 mechanism should return a zero value for a missing parameter instead of an error. It's related to the gRPC convention about return parameters. Okay, could you say more? I'm not quite following. Oh, can you open my PR related to the WireGuard remote mechanism? Sure. Here, you suggested using the gRPC convention, and I have implemented this for SRv6 as well. Okay, that makes perfect sense. Apologies for being so slow this morning.
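The zero-value convention being discussed — returning a parameter's zero value rather than an error when it is absent — can be sketched in Go. The `Parameters` type and `Get` method below are illustrative stand-ins, not the actual NSM mechanism API:

```go
package main

import "fmt"

// Parameters models a mechanism's parameter map. The name is hypothetical,
// used only to illustrate the convention under discussion.
type Parameters map[string]string

// Get follows the gRPC/proto3-style convention: a missing parameter yields
// the type's zero value ("" for strings) rather than an error.
func (p Parameters) Get(key string) string {
	return p[key] // a Go map lookup already returns the zero value when absent
}

func main() {
	params := Parameters{"srcIP": "10.0.0.1"}
	fmt.Printf("%q\n", params.Get("srcIP")) // "10.0.0.1"
	fmt.Printf("%q\n", params.Get("dstIP")) // "" — zero value, no error
}
```

This keeps callers simple: they check for the zero value where it matters instead of threading error handling through every parameter read.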
Because that does help a great deal, because otherwise things get very complicated. And, if memory serves, that's a fairly common convention. Okay, so that explains the SRv6, the VXLAN, and the remote mechanism.

And then you had some news on the CoreDNS fanout plugin. I know we talked a little bit about this, but it's probably better to talk about it with the broader community. Oh, yes. In short, the CoreDNS folks suggested using the forward plugin with zones handling instead of the fanout plugin. This way looks good for us, and it can also simplify the NSM-specific part of DNS. I have prepared a PR for this. It isn't passing CI yet, but I'll work on it.

Yeah, so basically, do you still have some use case for the fanout? Actually, if the user correctly configures their DNS configs, the forward plugin will work fine. We could face some problems with recursive servers, but I actually can't find any case where the forward plugin with zones handling will not work as expected.

Yeah, this was effectively a suggestion that came out of the discussion with the CoreDNS folks. We sort of described what we were trying to do, and they said: look, if you know your DNS servers, you can use zones. So if I come in with a network service and say I'm providing DNS for foo.example.com in the DNS context, then we can simply put in a record for foo.example.com and do our lookup there. And this, I think, gives us about as good as we're going to get.
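The zones-based forwarding the CoreDNS maintainers suggested can be sketched as a Corefile fragment. The zone name and server address below are purely illustrative, not the actual NSM configuration:

```
# Hypothetical Corefile sketch: per-zone forwarding instead of fanout.
foo.example.com:53 {
    forward . 10.96.0.53        # DNS server supplied via the NSE's DNS context
}
.:53 {
    forward . /etc/resolv.conf  # everything else goes to the default resolver
}
```

Queries matching the network service's zone are forwarded to its DNS server; all other names fall through to the cluster's normal resolution path, which is what removes the need for the NSM-specific fanout logic.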
We still have the underlying problem that if people are doing split DNS, they may be resolving the same domain name to different IPs internally and externally. But that's going to be a problem no matter what we do. Oh, yes, you're right. I can suggest a solution in Slack after the meeting, if you don't mind. Very interesting. Yes. Okay. Cool.

And then there was a comment here, Denis, about the cross-connect server sending empty statistics for metrics. This was open. Yes. We merged a PR that switched the metrics service to the stats polling service from VPP, and it looks like the problem is solved. I asked Ivana to check this. Oh, yeah, I can also confirm by test that the polling services are working fine. Perfect. The good news is, Ivana, I don't think we have a lot on the call today, so we'll be able to take a look and see if that's been resolved. My general preference is, where possible, to get confirmation from the person who opened the bug that we have in fact fixed their problem, because I'm sure we've all been there: you did your level best, you thought you fixed it, everybody agreed you fixed it, and you didn't fix it.

Okay. And then: add an option to break tests after several test failures. I've applied all the suggestions, and now I'm waiting for my reviewers to approve. Cool. Excellent. I think I'm the last reviewer; I'll get you the review a little later.

All right. So, in progress. Here's another one: ping by hostname cannot succeed for chain tests. This is a simple problem, which was found by testing the NSM suites. I found that for some chains the ping does not succeed. I have added steps to reproduce. Can you open the issue? Yeah, it looks simple, but as we discussed internally, something is happening inside VPP, and the ping is not going through. Okay.
So, I'm not quite following: how is the fact that the ping is not going through related to pinging by hostname? Oh, it's a test related to DNS. The test uses DNS and tries to ping the NSE by hostname. The NSC just has a DNS config for this specific hostname, and we try to ping the NSE by its hostname. And here we face the problem that, with chain tests in suites, it can fail on CI. I've added an issue for this.

Okay. So I guess the question is: is the problem with DNS resolution, or is the problem with... Well, I think it's related to VPP. I quickly looked into the logs, and I found that the forwarder there has very large logs. I think we need to investigate this problem; I have attached the logs. It's mostly related to switching to suites for the integration tests, where we reuse the same forwarder and the same network service manager, and we run into issues like this. Okay, that's starting to make a bit more sense. So you're currently digging into it, then. Correct. Yep.

Oh, yes. Okay. Next: adding goleak for checking goroutines on CI. I've added goleak to the chain element tests, and it shows that there are leaks in the client and the monitor client. Now I'm trying to figure out how to analyze them properly. You can see the logs on CI. Yep.

Okay. So I think, overall, checking for leaks is definitely a good thing. And you're saying heal and monitor are leaking, plus the expected goroutine leaks for the tests. So it's possible that what's going on here is that the tests are not properly doing the right close, because I know that one of the things Ilya did was switch over heal, and I think perhaps also monitor, to having a chain-level context, in order to cancel the various goroutines and cause them to quit.
So you may want to take a look and see whether the tests themselves are cancelling that context at the end; that may be the source of the leak. Yeah, probably. But no, I think adding goleak is a wonderful thing, because we want to run long-term, and we definitely don't want to be leaking goroutines.

Okay. So, the network service manager command application and the testing stuff. I saw... mostly... he's on PTO today, unfortunately, and I plan to help him. The general idea is to have a network service manager based on the new SDK chain elements, and to add the required chain elements to the SDK. Yep, I think that's a good idea.

I do have a question for you guys, and this is, more than anything, me trying to sort out whether or not I'm making things too hard. You were looking at calling this command the network service manager, which is a perfectly fine name. I have been thinking about putting k8s into the name, because it's Kubernetes-related. I don't know if that would be overcomplicated naming, or whether it would communicate something useful or valuable. So input on that would be super welcome. Yeah, actually, we can choose any name. If you have some document describing a naming policy for all of the applications, it's a good time to use better names. But the network service manager pod contains three parts. Yeah, this is equivalent to the current NSMD container, so it's independent of Kubernetes. Yeah, exactly: the application itself is independent of Kubernetes.

Okay, so we can definitely sort that out. The way I've been thinking about it, quite frankly, is that it actually doesn't make any sense. It turns out that we make our lives incredibly hard and painful by pulling the network service manager device plugin out into a separate container. That just makes things really hard for no good reason.
So when doing this, I was thinking of actually having the device plugin piece exposed directly from the network service manager command, because there's literally no point in pulling it apart. But Ed, I think we discussed this before: we only need a device plugin if we want to have a workspace, with memif and so on, for the endpoints mostly. So probably we could do the same thing Kubernetes does and have just one single socket for NSM.

I would love to get to something like that. The tricky problem is how do I get the... On the one side, you've got one socket where the network service manager is listening, right? Yeah. On the other side, you also need a socket where the NSE is listening. Yeah, but Kubernetes also needs the same thing, so we can just create a file inside a folder mounted to both of us and provide it to the NSM manager, so it can connect to the NSE via the socket file.

Again, how do you get there? The tricky problem you're running into is that if you want per-pod directories to put the socket files in, then you need some way of getting those per-pod... Ed, I think we could just have one folder, because we have security right now; it's not a problem to manage connection restrictions on the socket files. If you have one folder, then that means, for example, that a rogue pod can mount a denial of service against an NSE that may not be prepared for it, right? Effectively, if you look at the way all the Kubernetes components do this, either (a) you're only ever talking northbound — you're only ever having somebody talk to the one socket, for example when you're talking to the Kubernetes API locally, or to the API server — or (b), if you look at the device plugin stuff, you end up registering a socket back via the device plugin.
So you basically have a registration call where you say: this is my socket, go and deal with this. So I definitely want to investigate whether we can get to this place. I guess the point is, I'm not seeing 100% how we get there yet. Does that make sense? Yeah. We didn't actually run into any problems here during our experiments.

Yeah, so again, I would like to know how we get per-pod mounts done so that we don't have pods cross-talking to each other without a device plugin. In our internal discussions, it was about mounting two folders: one for the NSM server socket, and one shared folder for any of the NSEs to put their client sockets inside. It's the second one that worries me very badly, because that folder opens up a huge set of potential security issues, right? On those sockets, for the NSEs, it will just be NSE servers with SSL.

Right, but let me give you a very, very straightforward example. You are pod one; you listen on a client socket in that folder. I am pod two, the nefarious pod. I delete your socket file and replace it with my own. Now, you have gone and registered the fact that you actually provide this network service, but I have sewn myself in to receive the calls meant for you. Okay, okay. If it's allowed to delete, yeah, that could be a problem.

I mean, we can potentially look around and see if people have found other good ways of solving this, but the very naive approach of having one folder for all the clients to put their sockets in has real potential for security issues. Because, again, you can literally go and catch other people's messages, receive other people's calls, and prevent people from reaching the legitimate network service endpoint. Yeah, okay. Okay, we will discuss this and see if we can expand on it. Yeah. But no, I understand exactly what you're trying to get rid of there.
It would be great if there's a smarter idea out there, but I don't think it's as simple as just having a separate folder for the clients to drop their sockets in. Yeah, we discussed a few variants, actually. Probably the best would be to have the endpoints served on a TCP socket — a TCP socket available on the node — and the NSM manager would connect to it using TCP. That's the safest way, I think. Yeah, again, the one thing I want to think through there is that it actively precludes us doing any kind of CNI intercept, because then we cannot function independently of the CNI. Yeah, okay.

So, I mean, I actually actively encourage trying to figure out something smarter in this direction, because I think that would be wonderful. I want to make sure we deal with it here, because it would actually make me greatly happy to move to something quite a bit simpler. Yeah, okay. Cool. All right.

And in fact, one of the basically very cheap things we can do along the way, which I have been doing, is that in a lot of the places where we have been using socket files, I've been using URLs with the unix scheme. Then, when we do get to the point where we've figured out a way that doesn't involve socket files, we won't have nearly as much recoding to do, because we've got a mechanism that's generic.

Cool. Awesome. Let's see: WireGuard remote support for the VPP agent forwarder — how is that going, Denis? Here, we've found that VPP's af_packet is incompatible with WireGuard, because af_packet works on layer 2 and WireGuard works on layer 3. At the moment I'm trying to add to af_packet the ability to work at L3. I haven't run into problems yet with this approach, and I'll let you know about any updates.
That would be awesome, because from the brief glance I took, it looked like af_packet probably would do the right thing, if in fact the af_packet plugin for VPP were coded to do the right thing. It's just that the people who wrote it had only been thinking in terms of the Ethernet case, so they coded it for L2. So it seemed potentially possible, and I'm glad you're digging into it.

We're running up against the edge of the hour. Shall we all go jump on the community meeting? Yep. All right, talk to you later. See you.