Hi Frederic, how are you? I'm doing well, and how are you doing? Good morning. I do apologize; I just wanted to let you guys know I'm being summoned to an internal meeting I can't dodge, related to all the various incendiary goings-on. So I will have to duck out here, and for the next hour as well. No worries. Are we still on for our nine o'clock? We are. Cool. Excellent. I guess it will just be the four of us. Cool. So what do we want to talk about? Other team members also joined. I think we can discuss the current progress, what's going on on our side at least. Sure. Please, could you start by sharing details about the DNS changes and about the WireGuard work you're doing? Yes, sure. For DNS, I have provided a PR with improvements on the NSM part. For example, I have started to use the external fanout plugin, for now from a separate repo. Also, I have removed the NSM CoreDNS image from the monorepo. And I have implemented the CoreDNS suggestion to use the forward plugin by default and the fanout plugin for conflict resolution. The PR has passed all tests, so please take a look. I can share my screen for the project board. Yeah, let's do so. Do you see my screen? Yeah, we see your screen. Nice. Here is the PR, the DNS improvements. Oh, sorry, that's another PR, the DNS notification one. I mean this PR. Please remove the participants window, the participants panel in Zoom. Okay, sure. Okay, thank you. Also, in two words: at the moment we will use both the fanout and the forward plugins. If you don't have conflicting DNS configurations, yes, that's correct, and it should work better than using only the fanout plugin, because the fanout plugin sends upstream requests in parallel, and in most cases that isn't needed. Also, I have worked on WireGuard, moving it into VPP. I did simple steps: for example, I created a hello-world VPP plugin, and I have started looking at the WireGuard Linux implementation. That's all from my side. Okay, sounds good.
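The DNS setup described above (forward by default, the external fanout plugin only where configurations conflict) could be sketched as a Corefile. This is a hypothetical illustration, not the contents of the PR: the zone name and upstream addresses are invented, and fanout is the external CoreDNS plugin mentioned in the discussion.

```
# Default: resolve via the built-in forward plugin (sequential upstreams).
. {
    forward . /etc/resolv.conf
}

# Hypothetical zone with conflicting DNS configurations: query all
# upstreams in parallel via the external fanout plugin.
example.org {
    fanout . 10.0.0.1 10.0.0.2
}
```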
So, a few steps, I suppose, in the direction of having WireGuard in VPP. I hope the variant you're looking at at the moment will work. Well, Alexander is not joining today, but he's working on a new version of the NSM manager based on the new SDK. He is in sync with me on this direction; we're doing some diagramming so we don't miss anything we want to have in the new manager. At the moment I'm prototyping the streaming direction and writing a spec to have a single socket for the NSM manager, so we can use just one server socket connection, without the need for a workspace, for the pure client-request logic and for most of the endpoints that don't use memif. I already have a prototype on pure gRPC servers. So the idea is: at the moment, every endpoint has a gRPC server on a client socket. Instead of having an additional mounted volume, the idea is to use just one bidirectional gRPC stream to pass all the pure gRPC calls back to the client, implementing the gRPC client and stream interfaces and just forwarding all the packets over that one gRPC stream. It all seems to be working. I plan to provide a pull request for review, with the spec, later today or early tomorrow morning, so we can discuss it and you guys can check whether this approach will work. In that case, we could dramatically simplify the NSM managers, because there will be no need for a device plugin, and the setup for clients and endpoints will be much easier and cleaner, because just one volume with just one server socket will be required. Yeah, actually, I could not find any existing solution similar to what we wanted in this area; it looks like nobody has tackled this. But if you think the solution is clean and would be useful to others, then we should consider, in the long run, perhaps publishing some information about it for others to use. Yeah, yeah.
So at the moment it actually looks very clean, because it's just one Go module with a protobuf file. As before, you create two gRPC servers, but the real network activity happens only on the server instance of the APIs. So if you want to call a client, you just use a callback API: call one function and it will forward all the gRPC calls back to the client. So from the Go code point of view, it looks very nice, actually. It's not complete at the moment, but I can show how it actually looks, so you have some hint. Do you see my screen? Yes. Okay, so here is what I'm doing in this test. I have a server; on the server I just register a callback service. It's one service with one method, with a bidirectional stream of requests and responses; requests have some options, like passing arguments and a reply, and the type of request. On the client side, I run a client gRPC server; for the test I register a network service on it, and I do just one call, callback serve, and the server will then be able to call the client. Here is how it looks for the server: the server just creates a new client with an identifier; the identifier could be a specific ID or the passed authority. And it looks like an absolutely normal client connection interface. Thanks to the latest version of gRPC having these two interfaces, this is now possible; before, it actually was not. Then I just use a regular network service client and make the call. So from a code point of view, it looks exactly as if I were calling a normal, pure gRPC server. But underneath, it is packed into these request messages via the Invoke method, or, if I want to create a new stream, there is a request mode for NewStream, and messages are sent back and forth. Most of my tests work the same way, and fast. So, at least for our NSM scenarios, it should work. Pretty cool, I think.
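The callback pattern described above can be sketched without gRPC, using only the Go standard library. This is a minimal illustration of the idea under discussion (one bidirectional stream, with the "manager" side issuing requests that are served by the side that dialed in), not NSM's actual code; the frame shape and all names here are invented.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"
)

// frame is a stand-in for the request/response messages carried on the
// single bidirectional stream (fields are illustrative, not NSM's).
type frame struct {
	ID     int    `json:"id"`
	Method string `json:"method"`
	Body   string `json:"body"`
}

func main() {
	// net.Pipe stands in for the single server socket the endpoint mounts.
	managerSide, endpointSide := net.Pipe()

	// "Endpoint" side: it dialed the manager, but still serves calls that
	// arrive back over the same stream (the callback direction).
	go func() {
		dec := json.NewDecoder(endpointSide)
		enc := json.NewEncoder(endpointSide)
		for {
			var req frame
			if err := dec.Decode(&req); err != nil {
				return
			}
			// Dispatch to the locally registered "network service"
			// and send the reply back on the same stream.
			enc.Encode(&frame{ID: req.ID, Method: req.Method, Body: "handled " + req.Body})
		}
	}()

	// "Manager" side: this reads like a normal client call, but it is
	// forwarded to the endpoint over the already-open stream.
	enc := json.NewEncoder(managerSide)
	dec := json.NewDecoder(managerSide)
	enc.Encode(&frame{ID: 1, Method: "Request", Body: "connection-1"})
	var resp frame
	dec.Decode(&resp)
	fmt.Println(resp.Body) // prints "handled connection-1"
}
```

In the real prototype the framing is a gRPC bidirectional stream and the dispatch is `Invoke`/`NewStream` on `grpc.ClientConnInterface`; the point here is only that one connection carries calls in both directions.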
Yeah, that makes a lot of sense. The only challenge we may run into, and it's easy to solve, is as people want to use more languages with NSM. Let's say they have someone who does something in C, or Rust, or Python, and so on. We may, in the long run, want to write some of this stuff out just to make it very easy for them to use NSM. But yeah, that could be a problem. Yeah, you're right. That's okay, but it looks easy enough that someone could write it out, because at the end of the day you're still talking over pure gRPC. Yeah, yeah. So it's just a matter of creating the pattern for them. And it may even be possible. On the client side, I'm just reading the messages on a gRPC stream and doing Invoke; under the hood, for the tests, I've actually tested both creating a real client gRPC socket on a file and on TCP. For the tests I create the socket on TCP, to avoid having to clean up files in the file system afterwards, and I just forward to a real gRPC server. So probably it could be done. Actually, creating this kind of forwarder is not so complicated; you just implement all of these methods, and there are not so many of them. If you will have clients in different languages, an alternative is a proposal to have something similar to this one: an NSE registers on the manager and then does the same thing, but with this request and response built into our Network Service Manager APIs. I'm not sure that makes it cleaner, though. It would be easier to implement for different languages, of course, but the code would not look as nice or be as easy to use as the previous approach that you showed me. And, you know, if this is the direction we end up going in the long run, we can work with the community to just put out the right set of libraries so that it becomes easy for them to use the bidirectional stuff.
And it'll definitely be useful for others, because I know one of the main problems people have with gRPC is that there are no real bidirectional capabilities. Well, it's actually baked into gRPC, but it's just a stream. Yeah, exactly, it's not easy to use. But with these interfaces it becomes usable. I don't remember exactly when this was introduced, but not so long ago, maybe two or three months ago; before that it was just a client connection, without this interface. What else? I think we still have some work in the goleak area, and we're still working on some issues in the monorepo, understanding why sometimes a ping does not succeed with these test suites. That's all from our side. Frederic, do you have some areas we can focus on? Yeah, so the things coming down the pipeline that I personally care about: one of them was trying to get WireGuard support into VPP. The second one I was looking at is trying to get some more advanced examples of Open Policy Agent and SPIFFE interaction. Given what's available, we're able to make a decision based on the last token in the chain, the last element, if I read it properly. One of the things we want to be able to do is run policy that traverses the whole chain, for all SPIFFE IDs that are in a given connection. But I think the right way to approach it is to come up with a set of scenarios for the things we expect to see, and the things we expect there to be policy on, that fit into Open Policy Agent, and use that to guide the next set of changes in the Open Policy Agent and SPIFFE interaction. Does that make sense? Yeah. So, for me, the SPIFFE and Open Policy Agent items are probably the highest priority for the use cases that I'm seeing.
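The chain-traversal idea above, checking every SPIFFE ID on a connection path rather than only the last element, can be sketched in a few lines of Go. This is a hypothetical illustration of the policy shape being discussed, not actual OPA or NSM code; the trust domain and workload paths are invented.

```go
package main

import (
	"fmt"
	"strings"
)

// allowedByPolicy evaluates a policy over the whole chain of SPIFFE IDs
// on a connection, not just the last element. The example rule: every
// workload in the chain must belong to the example.org trust domain.
func allowedByPolicy(chain []string) bool {
	if len(chain) == 0 {
		return false // an empty chain proves nothing
	}
	for _, id := range chain {
		if !strings.HasPrefix(id, "spiffe://example.org/") {
			return false // one out-of-domain hop rejects the connection
		}
	}
	return true
}

func main() {
	// Every hop in-domain: allowed.
	fmt.Println(allowedByPolicy([]string{
		"spiffe://example.org/client",
		"spiffe://example.org/forwarder",
		"spiffe://example.org/endpoint",
	}))
	// One hop outside the trust domain: rejected, even though the
	// last element alone would have passed a last-token-only check.
	fmt.Println(allowedByPolicy([]string{
		"spiffe://evil.test/mitm",
		"spiffe://example.org/endpoint",
	}))
}
```

In practice the rule would live in Open Policy Agent's Rego rather than Go, but the input would be the same: the full ordered list of SPIFFE IDs for the connection.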
So, yeah, and I'm going to see about setting up a call between you and me privately so we can go over some of this stuff in detail, sometime in a day or two, just as a heads up. We'll go over all this stuff. Yep. Okay. Hey, Nikolai, I missed you, I think. Hey. Did you just join? Does anyone have something to say? We'll meet in 10 minutes in the community meeting. If we don't have any other main topics, let's go ahead and take a short break. Yep. See you in 10 minutes. See you shortly. Thanks. See you.