Hi, folks. Ah, we made it here. That's a good question; it's on GitHub, let's wait a moment. Okay. We have people who are still joining the other call; maybe they didn't notice that the URL changed. Hi, folks. Okay.

Before we dive in, can we just spend a minute or two on the old PRs, the ones older than two months? I think some of them are close, but there are PRs that are just sitting there, and they are probably no longer relevant. Just as a reminder: is this something we need to delete, or is someone doing something with it? You said we wanted to do something before we delete them.

Okay. Yeah, I mean, just... we have a couple of PRs going back to September or even earlier. I'm unprepared. Should I share? Go ahead.

So, this one. PRs. If we look at this one here, this is everything that predates. There is a thing from Araluslav that has been sitting there forever. It's about an IPv6 testing environment. Is this still something that we want? I just can't remember. It's quite a thing.

Yeah, I mean, the question, I guess, is... I suspect that something actually got done here. Or maybe it has just gone stale. No, it's gone stale, but the thing is: is it something that we still feel that we want?

I would love to see IPv6 testing. I think it's going to be important to us, because a lot of folks are going to have IPv6 stuff going on.

Yeah, but things have changed, because today you actually can have both IPv4 and IPv6 in a cluster. So, you know, the world has moved on since April.
So I think we need to reconsider what we actually want here in this new environment. That's a better description of it. Okay.

Then we have this thing about the device plugin, which I suspect is still a thing in the sense that we want to get rid of it, but I'm not sure if it's still... I mean, it doesn't indicate any problems or conflicts, but... It's not complete, I think. Yeah, it is not. Denis, I think you were the last one who worked on this. Do you remember when you stopped? The 9th of August. Oh, yes. We found some problems with it. We need to implement one gRPC server for... for NSMD.

Yeah, I remember a bit. So a few important steps need to happen before we can make this change. First, right now every client gets a separate gRPC socket. I think we need to have just one unified socket, the same way Kubernetes does. And for this, proper client identification needs to be implemented. This also involves workspace management; right now we are using a device plugin for that. So we need to somehow solve the assignment of separate subfolders for the memif socket and for the endpoint's unix socket, so that they are independent of one another. Until these problems are solved, I'm not sure it will be easy to get rid of the device plugin.

Yeah. At the end of the day, I think we will have an easier time getting rid of the device plugin stuff in the new repos, because they're actually properly modular. It would be major surgery to get rid of it here, and relatively minor surgery to work in that direction in the new repos. Yeah, I think so.

Shall we just drop this PR, then? There are a lot of outdated PRs, so I think we just need to close all of them. Okay. I think it deserves some description of why we close it, but okay, fine. You're absolutely correct, it deserves a description. At least a sentence. I mean, it has sat here for half a year.
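The unified-socket direction described above can be sketched roughly as follows. This is a minimal illustration only, with hypothetical paths and names (none of them come from the actual NSM code): a single listener socket shared by all clients, with the per-client memif directory and endpoint unix socket assigned under independent subfolders derived from an authenticated client ID.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// Workspace describes the per-client paths carved out under a shared root.
// All names here are hypothetical, for illustration only.
type Workspace struct {
	MemifDir       string // subfolder holding the client's memif socket
	EndpointSocket string // the client's endpoint unix socket
}

// unifiedSocket is the single gRPC listener all clients would share,
// replacing the current one-socket-per-client scheme.
func unifiedSocket(root string) string {
	return filepath.Join(root, "nsm.io.sock")
}

// workspaceFor derives independent subfolders for the memif socket and the
// endpoint socket from an authenticated client ID, so the two no longer
// depend on one another.
func workspaceFor(root, clientID string) Workspace {
	return Workspace{
		MemifDir:       filepath.Join(root, "memif", clientID),
		EndpointSocket: filepath.Join(root, "endpoints", clientID, "endpoint.sock"),
	}
}

func main() {
	root := "/var/lib/networkservicemesh"
	fmt.Println(unifiedSocket(root))
	ws := workspaceFor(root, "client-a")
	fmt.Println(ws.MemifDir)
	fmt.Println(ws.EndpointSocket)
}
```

The point of the sketch is the shape of the problem: client identification picks the subfolder, and the listener socket stops being per-client.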
Okay. Then we have "possible store described". Okay, I don't think we should go through each and every one of them. I've tried to send some messages here, as a ping, like, okay, do we still need this one? But please, just check whatever has been hanging there long enough. I would consider anything from before November questionable, so everything here is somehow questionable. Okay. Saying this, I think we should move on, unless folks have something else to say on this topic. Yep. Okay, I think that's fine.

Do you want to go ahead and share the project board, since you're already sharing? Yeah. Awesome.

Okay, so what do we see here? Usually my experience is that it's easier to go right to left, because that way, as you move things to the left, you don't wind up re-reviewing issues. Otherwise you look at a thing, go "oh, that's in progress", move it to the in-progress column, and then when you get to the in-progress column you're looking at it again. Right to left, exactly: look at the done things first, then the in-progress things, and so forth. Yeah.

By the way, Denis, you're doing a great job picking up a lot of the small things, like this one here about adding client dial options to the connect server. There was a TODO comment in there; it turns out not to be super difficult, and it literally lets us get rid of a whole bunch of... we have an insane amount of dial machinery currently. This literally lets us get away from that, because we can take gRPC dial options for any of the things we might want to tune and pass them in to the constructor for the connect server. Now we don't have to use our own bespoke dial libraries; we can just use the gRPC ones, which is cool.

Other things that landed were really cool too: SRv6 landing made me happy, and the DNS stuff made me very happy as well.
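The dial-options change praised above follows a standard Go pattern: instead of a bespoke dial library, the server constructor accepts caller-supplied dial options and reuses them for its outgoing connections. Here is a minimal sketch of that pattern, with hypothetical local types standing in for the real connect server and for `grpc.DialOption`:

```go
package main

import "fmt"

// dialConfig collects settings applied when dialing; a stand-in for the
// configuration a real gRPC dial would take.
type dialConfig struct {
	insecure  bool
	timeoutMS int
}

// DialOption mutates the dial configuration; a stand-in for grpc.DialOption.
type DialOption func(*dialConfig)

// WithInsecure and WithTimeoutMS are illustrative options only.
func WithInsecure() DialOption {
	return func(c *dialConfig) { c.insecure = true }
}

func WithTimeoutMS(ms int) DialOption {
	return func(c *dialConfig) { c.timeoutMS = ms }
}

// connectServer stores the options handed to it at construction time and
// applies them on every outgoing dial, so no bespoke dial machinery is needed.
type connectServer struct {
	dialOptions []DialOption
}

// NewConnectServer accepts the caller's dial options up front.
func NewConnectServer(opts ...DialOption) *connectServer {
	return &connectServer{dialOptions: opts}
}

// dial applies the stored options; a real implementation would hand them to
// grpc.DialContext instead of printing them.
func (s *connectServer) dial(target string) dialConfig {
	cfg := dialConfig{}
	for _, opt := range s.dialOptions {
		opt(&cfg)
	}
	fmt.Printf("dialing %s with %+v\n", target, cfg)
	return cfg
}

func main() {
	s := NewConnectServer(WithInsecure(), WithTimeoutMS(500))
	s.dial("nsmgr:5001")
}
```

The design win discussed in the call is exactly this: the caller owns the dial configuration, and the server just threads it through.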
So those are all good things. Are we tracking only the new repos here, or not really? Mostly. I mean, a few other things are being added here as well, and part of the reason is that this project board spans multiple repos; you needed it so you could track things that span multiple repos.

And this is the direct memif stuff that Denis is working on, which looks like it's getting very close. It's still a draft somehow, yeah. But it turns out to be kind of cool, because you can write a single little chain element at the end that, just before you commit things, asks: do I have two interfaces in my config? Are they both memif? And if that's true, you say: I shouldn't be sending this to VPP; I should wire the two directly back and forth. Great.

Okay: review in progress, in progress. Moder-core chain metrics. Okay, I don't know if Ivana is on the call. I know that there has been something... is there an implementation here? I got the impression that there was a PR about it. Yeah, if I'm not wrong, it's the same PR that is in the API repo. Okay, but then it should refer to it.

By the way, Ivana, I'm really liking the idea you're pushing of reporting metrics directly up into Prometheus. I think that ends up being an amazing thing. And one of the things that occurred to me is that, because we're pulling metrics along the path, you could imagine a client being able to report metrics to Prometheus, and the client, because it's getting the metrics from the path, is now able to see the metrics for its entire path.

Yeah, actually the current Prometheus integration is not updated to support the path. But if we are going to have the new API soon, it doesn't make much sense to update the integration with the path and then redo the whole refactoring again. So maybe we should implement that directly after we have the new API. Yeah.
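The direct-memif chain element described above boils down to one small check just before committing the connection. A sketch under assumed types (the real NSM mechanism types are richer than this): if the config carries exactly two interfaces and both are memif, skip programming VPP and cross-connect them directly.

```go
package main

import "fmt"

// Interface is a stand-in for a configured connection mechanism;
// only the fields needed for the check are included.
type Interface struct {
	Name string
	Type string // e.g. "MEMIF", "KERNEL", "VXLAN"
}

// directMemif reports whether the config should bypass VPP: exactly two
// interfaces, both memif, can be wired to each other directly.
func directMemif(ifaces []Interface) bool {
	if len(ifaces) != 2 {
		return false
	}
	return ifaces[0].Type == "MEMIF" && ifaces[1].Type == "MEMIF"
}

func main() {
	cfg := []Interface{
		{Name: "client", Type: "MEMIF"},
		{Name: "endpoint", Type: "MEMIF"},
	}
	if directMemif(cfg) {
		fmt.Println("direct memif: not sending this to VPP")
	} else {
		fmt.Println("programming VPP")
	}
}
```

Placed as the last element of the chain, this check runs after the rest of the chain has filled in the config, which is why "at the end, just before you commit" matters.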
Yeah, I mean, effectively all you have to do is implement a chain element that grabs that stuff, and then it gets kind of interesting how you label the Prometheus buckets. That's an interesting question. At the moment it's just by pod names and namespaces, and I think we may add path segments, for example, in order to distinguish which path a metric is part of. And what I proposed on Slack (I don't think I wrote about it in the issues) is that after having that, we can implement a collector: if you point it at two points of the path, it can collect the metrics across those segments. Just by giving it two endpoints, two vertices of the graph, it would collect the metrics from all the path segments in between and expose them. Yep.

I've been thinking about the metrics themselves on a particular path segment, and please note this is not super well thought out, but it seems to me that you basically have transmit, receive, and drop as the metrics, right? Incoming, you have receive; outgoing, you have transmit; and then drop tells you how many of the things you received didn't get passed on to the next path segment. So you can tell the difference between "this node dropped something" and "I lost something on the wire".

At the moment, in the old way without the path, we have error packets; I think it's the same. Okay. Yeah, I mean, I think this is going to end up being crazy cool, because you'll literally be able to allow any participant here to publish the metrics for anything downstream of them.

Actually, I'm preparing something for the community call today with different tools; we can settle on one that is good for visualization. Once we have everything in Prometheus, we can have a good, observable way to see the network, to poll it and see the packets going between different segments, and so on.
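The per-segment metrics idea above can be sketched with plain counters before any real Prometheus client is wired in. This is an illustration only; the label names (pod, namespace, path segment) follow the discussion, not any existing NSM code. Drop is derived as receive minus transmit for a segment:

```go
package main

import "fmt"

// SegmentMetrics holds the per-path-segment counters discussed:
// packets received into the segment and packets transmitted onward.
type SegmentMetrics struct {
	Rx uint64 // packets received by this path segment
	Tx uint64 // packets passed on to the next segment
}

// Drop reports how many received packets were not passed on, which is what
// distinguishes "this node dropped it" from "lost on the wire".
func (m SegmentMetrics) Drop() uint64 {
	if m.Tx > m.Rx {
		return 0 // guard against counter skew between reads
	}
	return m.Rx - m.Tx
}

// labels builds the kind of Prometheus label set discussed: pod name,
// namespace, and a path-segment identifier to tell paths apart.
func labels(pod, namespace, segment string) map[string]string {
	return map[string]string{
		"pod":          pod,
		"namespace":    namespace,
		"path_segment": segment,
	}
}

func main() {
	m := SegmentMetrics{Rx: 1000, Tx: 950}
	fmt.Printf("drop=%d labels=%v\n", m.Drop(), labels("nsc-1", "default", "seg-0"))
}
```

With a real integration, each counter would be a `CounterVec` keyed by these labels; the proposed collector would then sum the series between two chosen vertices of the path.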
We can discuss that in more detail in the community meeting; maybe this is separate, this is just for the visualization itself. No, I think this is great, but it is probably for the community meeting, because we've only got 11 minutes before we need to jump into the community call, and it's a different WebEx than this one. But no, I'm super excited about this stuff. Cool.

Okay, what else do we want to point out here? I think good stuff is happening. Radislav, you probably saw... do we have Radislav on the call? We don't. We do have Zemmick, though. Zemmick just landed the mockable kernel stuff, basically mockable netlink pieces, in the sdk-kernel repo, so hopefully things will go relatively cleanly from there. Is there anything you want to say about all that, Zemmick?

Hi. It was part of the initial POC PR, and I decided to take part of it and push it against the sdk-kernel repository. Beyond that, I created a new issue in the sdk-kernel repository to start moving other pieces of that PR to sdk-kernel, so I'm planning to work on that in the upcoming days. Awesome. Good. Cool.

Also, Denis, how is upstreaming the fanout plugin into CoreDNS going? Oh, I've asked the folks from CoreDNS about moving the fanout plugin, and they don't mind making it an internal plugin, so I have provided the PR and I need to complete the requirements for new plugins. Cool. So this is awesome, because it means that in the not too distant future we'll just be able to use off-the-shelf CoreDNS instead of having to build our own.

Oh, UDP. What do you mean by UDP? Is it TCP, or why UDP?
Oh, it is the transport protocol for DNS. The default should be UDP, as far as I remember. Oh, it can use TCP for transport too. Yeah, so basically you usually use UDP for DNS. The three transports I'm aware of for DNS are: UDP, which is what you usually use; TCP, which doesn't get used as often; and then, very recently, DNS over HTTPS. So we probably want to make sure that all those protocols are supported, but basically we want to look at what they're doing, probably in something like the forwarder, to see how they're handling that. I mean, my question was that I would assume we support not only UDP by default... but okay, fine, these are probably details. Cool.

Is there anything else that folks want to call out on the in-progress list, since we're running a little close on time? I'm a bit late with the metrics; I'm still working on them and plan to update my pull request today. My next idea is to put the metrics into the path segment, so I'm changing my pull request. In some areas it's easier and less code, in some areas it's a bit less easy, but I think in general we'd be better off putting the metrics into the path itself.

One of the nice things about putting them in the path segment itself is that a pod always has the option of refreshing a connection at any point. So yes, there are timers, and it needs to refresh before a certain point, but it can always refresh early, and that essentially allows a pod to pump metrics if need be. So if you think that something is wrong, you can pump your metrics and find out what the metrics are telling you. Okay.

A cool side effect of putting metrics into the path is that the metrics get propagated to the client and back to the endpoint, so all the participants will know the metrics. Cool.

Shall we close and have some time to switch to the next call? I'm cool with that if everyone else is. Cool, all right, catch you guys on the next call. Okay, see you. Bye.