Cool. Thanks. Great. So welcome to the Network Service Mesh meeting. We have three general meetings: this meeting, which occurs every Tuesday at 8am; the NSM document meeting, which is going to be switched to monthly; and the use case meeting, which has been postponed for July, since the people running it have been out for various reasons, but will reconvene on August 13, approximately two weeks from now. We also participate in the CNCF Telecom User Group; the next one occurs this coming Monday at 8am Pacific time. Does anyone know the status of the CNCF networking working group? I heard those weren't running at the moment. I'll check up on that. We also have events coming up. We have DPDK Userspace in Bordeaux, France, where we'll talk about the state of DPDK. We have a potential talk on September 19th or 20th at the Open Core Summit, depending on Prem; Prem is not on today. We have ONS Europe, where we have an extensive schedule, including discussions on CNFs, the CNF Testbed, kernel-based forwarding planes, and Service Mesh Interface interoperability with NSM. We have the CNCF Telecom User Group session there, which will be run by Dan and Taylor. We also have a tutorial on driving telco performance using the testbed; please pre-register if you intend to go. We have "Embracing Cloud Native on the Path to 5G," which is a panel discussion, and I'm sure something else will pop up. We have Cloud Native Revolution by Comcast Labs on October 8th; if anyone intends to talk there, the call for papers closes on August 16. Open Source Summit is coming up on October 28; the call for papers is already closed, and we're waiting for notifications this Monday on whether Nikolai's, Radoslav's, and Ivana's talks have been accepted. There is the Edge Congress in Austin.
And the schedule, I see, has been announced with a link. I don't know if there's anything on NSM there; it doesn't look like it, but it may still be useful, especially if you're in the area. We have ISTA Con, at which Ivana has a talk accepted; that'll be in Sofia, Bulgaria in November. We have KubeCon coming up November 18 through 21, for which the call for papers is already closed; we have multiple talks submitted, we'll get notification on them on September 3, and the schedule will be announced on September 5. We also have co-located events at KubeCon: ServiceMeshCon and EnvoyCon, which occur the day before. EnvoyCon already has its call for proposals closed. Are there any talks submitted to EnvoyCon that people are aware of? I think NSM and Envoy was submitted by Tim, if I'm not wrong; can you correct me here? To ServiceMeshCon? No, EnvoyCon. Oh yeah, I believe that's true. I can go check with Tim about that. But that'll be in Switzerland. Yeah, it would be Tim. Okay, I can definitely double-check with him; I know he was looking to do something there. We should probably work on some submissions to ServiceMeshCon as well. Cool. So the deadline for ServiceMeshCon is September 16, so we still have a little bit of time for that. Sorry, I got this one wrong earlier. Okay, one month. Practically an eternity. And again, we have Edge Computing World on December 10th or 11th at the Computer History Museum; I'll reach out to the person running that to see if he has further information and whether it's actually happening. There's also a recurring call for demos on the Kubernetes community meeting, so we should consider doing something there too as we start to close in on the KubeCon cycle.
And any events where NSM has a presence, please bring up here or add to the website. And with that, Lucina, you have the floor. Great, thank you so much. With the Open Networking Summit in Europe being announced, it was easy to find a lot of content to post to the Network Service Mesh Twitter account. So I posted and retweeted 35 times, gained 27 more followers, and followed a few more related folks. This week's plan: I'll continue announcing the ONS EU events, one a day, so I'm near the end. Towards the top, after each event, you'll see the word "tweet" with a link; that's the link to the Network Service Mesh account's tweet, so anyone's welcome to retweet or share from there. Those tweet links are awesome, by the way; I've already availed myself of them a couple of times so far. Yay, good. Also, 27 followers in a week is quite a lot; we're growing by leaps and bounds. You're doing an excellent job. Thank you. Yes, that was a surprising number for this week, but we've got a lot of cool things going on and exciting events to look forward to, so I'll keep posting about our events. I'll also share the 0.1.0 release announcement when it's ready. Are there any other items you'd like me to look into this week? I think you're doing an amazing job; that's the stuff I know about. Yeah, I'd like to use this as a segue to the next topic, if you don't mind, Lucina. So I will start running this Asia-friendly call, if we want to call it that; "workgroup call" is probably the better word. As I wrote here, it should happen every second Tuesday, 10 to 10:30am CET; that would be about 4pm in Shanghai. So I guess it would be more friendly to the potential users and people interested in NSM who are in Asia.
So I would like to ask Lucina to help me set this up, and probably announce it on Twitter as well, because I may end up sitting all alone for half an hour talking to myself. We do have some folks, I don't know if they're attending today, who routinely attend from Huawei and some others. So basically, as soon as we've got that up on the calendar and on the website, and we get a tweet promoting it, I know a bunch of folks in Asia who I think would be super excited to have a meeting on NSM that is actually in their time zone, where they're not staying up until 11 o'clock at night to attend. So I think we can definitely get some promotion behind this; let's get it up in all the usual places and get a tweet going and everything else. And thank you so much for being willing to do this. For those of us in North America, trying to do something at 4pm Shanghai time is unbelievably painful. It's hard to say that to you, because you're always up so late, but I'm hoping 10am is not terrible for you. No, 10am is perfect. I think that'll be 2am my time, so maybe I'll make the occasional special appearance. My idea is primarily to gather feedback, and anyone interested in a specific topic can try to bring questions, or whatever is needed, to the workgroup call later. So what I would like to ask is, I'm not sure if I have access to the right calendar, just for help setting this up properly. If needed, I can do any PRs to the site or whatever is needed, but especially for the calendar and all these social media things, I could definitely use some help. No, that's all good. You should have access to the calendar, but there's no reason you would remember how to add things to it.
And so I'm happy to help you on that front, and then a PR to the website would be great as well. Okay. So if anyone on this meeting, or listening to it later, would please spread the announcement, and nudge people from those time zones who are interested. Also, any suggestions about the format are welcome; I guess we should be able to use the same Zoom as this one. Yep. Okay. Just to be clear: is this every second Tuesday of the month, or every other Tuesday? Every other Tuesday. I want to start next Tuesday, then skip one, and so on; "every other Tuesday" is the proper description, not "every second Tuesday of the month." Cool, just wanted clarity. Thank you. That's all kinds of awesome; let's see how it goes. So for those of you who are currently up and it's really late at night, invite all your friends. Yep. That sounds awesome. Cool. Okay. Awesome. So I wanted to start highlighting some of the work in progress and some of the specs people are putting together as part of this meeting, to get more people looking at them. Because we have a lot of exciting things happening, and we also have a whole bunch of PRs and issues going on, it can be a little difficult, if you're just trying to do a linear scan, to figure out where all the pieces are and what interesting things are going on. So I wanted to take a few minutes, if that's okay, to highlight a few of the things that I know are happening, and give folks the opportunity to highlight things they know are happening that I have missed in my linear scan of the issues and PRs.
And then to point a little at some of the specs floating out there that could definitely use additional comments. Does that sound good to folks? Yep. So I wanted to give the folks who are working on this stuff an opportunity to speak up a bit about what they're working on. Now, I know not everyone is super comfortable speaking up in meetings, so it's perfectly fine to demur and I'll hum a few bars. But I did want to give folks the opportunity to talk a little about what they're actively working on. So the first one up is DNS. I think Denis is working on that. Do you want to say a few things, Denis? He may be muted. Okay, I'll go ahead and hum a few bars then, if he's not wanting to speak up or is having trouble with the meeting. The basic idea behind the DNS work is that one of the things we have to be able to do is this: if you start a pod and you connect to a network service, obviously your Kubernetes DNS has to keep working, but you may also need DNS from that network service. The DNS spec talks about a way to do that. After consulting with a bunch of folks, including some from the CoreDNS side, it turned out that probably the best way of doing it is to have a small DNS sidecar; according to the CoreDNS folks, it's basically a 10 meg hit. So it's a really small thing that can fan DNS out to the normal Kubernetes DNS, but also to the different network services that have indicated they have DNS to offer as well. Imagine a query goes out: it gets fanned out to all the possible resolvers, and the first response that comes back that is not a negative response gets returned to the workload. Denis has been working on that bit by bit, and I think he's got a couple of PRs that have gone by, and one that's actively out there being reviewed.
I wanted to bring that to folks' attention, in case they're interested in this area and want to comment, review, get involved, and so on. Okay, is there anything specific in the spec that you want to point out? No; I've effectively said a few things about the idea behind it, which is just multiplexed DNS. But the net result, from a user point of view, is that if I connect to a network service like a VPN, where I've got DNS inside my corporate intranet, then I get proper DNS service from my corporate intranet in addition to getting it from Kubernetes. Or at least that's the intention. And it also supports all kinds of cool features, right? Yeah, true. This was really quite excellent work done by Denis, and I know he's trying to break it up into smaller, manageable pieces. So I don't think it's all fully turned on when this patch lands; he's breaking it into more reviewable pieces as we go. Perfect. One thing I wanted to add here: during the discussion that Ed, Fred, and I had at KubeCon Europe with some members of the Kubernetes teams, it came up that this experience of keeping a sidecar DNS could be useful for other projects and other initiatives, so it could have a much larger impact than NSM alone. But okay, we're doing it for ourselves primarily. Yep. And I'm happy to shop this around as well; I'm pretty sure that when we start to approach other service meshes, or even just the multi-cluster federated use case, something like this will be immensely useful. Yeah, I could see all kinds of utility for it. It's really interesting.
One of the themes coming out of the work we're doing here is taking everything down from cluster granularity to workload granularity, and I think this is just an extension of the same thought. It's actually quite funny: talking to some of the more senior Kubernetes people, the very first thing they brought up when I first talked to them about it was, "What are you doing for DNS?" So it's definitely an important thing. Cool. So next up is security. Ilya, do we have you on the call? Do you want to say a few words about security? Yes, hi, I'm here. This PR just provides secure internal gRPC connections, without any tokens or anything. But I have a separate PR that has all the security stuff, as a proof of concept, and now I'm breaking it into pieces. So one of the things that's exciting with security, and I think this is true for quite a lot of these efforts, is that you have to go figure out how you're going to get there. By the way, if you're looking at the original document, note there was a second one that was done. A lot of these things take a lot of code to actually figure out what you're doing, because there's a lot of innovation involved. And so for a bunch of them, and I know Ilya did this for security, he went and did the whole thing soup to nuts, and then you look back at what you've done and realize you've written a really, really big patch. So he's been kind enough to break it into pieces to make it easier to review. You're on the third piece now, Ilya? That's true, yes. And this is kind of cool, because we're using industry standards, SPIFFE/SPIRE kinds of approaches, for the most part.
But one of the things we're doing that's kind of cool is around provenance. You can end up passing through a bunch of network service managers, and if you're trying to decide whether something should be admitted, it's not enough to know whether you trust the client, because the intermediate network service managers may have changed the message. You may want to evaluate whether you trust the clusters whose hands it passed through, among other things. So there's a little bit of cleverness around provenance that's kind of cool. If folks are interested in security, it would be great to get involved with reviewing that. My favorite is interdomain. Yeah, do you want to say a few words about the interdomain item? I just don't know what to say; right now interdomain is already working between different clouds, so we just need to split it into a few smaller pieces. So I had a set of questions. Okay, documentation is one thing, but can you at least share your experience with us now? Have you tried, I don't know, a Packet cluster to a Google Cloud cluster, or different clouds? What's your overall experience? Is it complex to set up? Just a more general description of your experience. Yeah, I've already prepared integration tests which connect Packet, Google Cloud, and Amazon clusters with each other, and it works from one side to the other without any problems. But how much manual work? I'm trying to look slightly further ahead: if we have to sell this feature to a system administrator or system integrator, someone who is supposed to deploy this kind of service, how complex is it to set up? Is there any manual work involved? We just need to start a proxy NSM manager on each side.
And the client side should put the destination address into the network service name, and that's it; it should work. Okay, that sounds good. I'm still not sure who is setting up the DNS, but I guess that's something we can discuss. Yeah, right now we're just using the system DNS, or you can put an IP address there. Right now it's working with IPv4, but I think next we can improve it for IPv6. Also, you can easily add your own DNS resolver to the project; I think I've already put that in the documentation. Yeah, that'll be fantastic, because it means we'll be able to easily integrate this with the DNS fan-out. I think that gets to be super cool. And obviously, for this to actually be secure, the security stuff has to land as well, so there'll be some interesting stuff when that happens. But I'm super excited about the interdomain work. The other thing that's going to be interesting with it is seeing the kinds of things people try to do with it; there's a lot of good room for experimentation. It's super exciting, and I'm hoping to see folks playing with it more once it lands. Cool. So once we get those three landed, do you think it'll be safe to say that we have a federated Kubernetes networking use case that is solid, or do you think there are other tasks that need to be done between now and then? There may be one last task we may want to do after the security stuff lands, around authorization policies: basically, allowing people to configure authorization policies. We might want to do that because right now it's set up in such a way that your authorization could be done by your network service endpoint and your client, but that's asking a lot of network service endpoints and clients. So you probably want the option of having that be policy provided by your network service mesh.
Does that make sense? Yeah, that makes sense. Even just getting the functionality down, we're going to excite a lot of groups. Oh, it absolutely is; it's all kinds of exciting, and we should definitely look at how we want to go about promoting it. That was actually one of the reasons I wanted to talk about the things in progress right now: there are so many exciting things happening that if you aren't really close to the ground on the PRs, you might have missed them going by. On the share, I'm only seeing a very tiny sliver of the document right now; I don't know if anybody else is having that issue. Yeah, same for me on the screen share. Better now? Much better. Okay. I was going to say we have increased pluggability, but I'll let you go on. Yeah, the next item is the increased pluggability work, and this comes from the observation that the network service manager is getting kind of big. Victoria, I think this is something you were spearheading; do you want to talk a little about it? Sure. The main idea here is to move all cluster-specific logic out of NSM. We want to get rid of the hard dependency between the Kubernetes sidecar and NSM. Right now the Kubernetes sidecar is started on port 5000 and NSM has to know about that particular port; instead, we want the Kubernetes sidecar to just register itself with NSM, so there is no hard dependency. We also want all cluster-specific stuff to be in this sidecar, so that if you want to move off Kubernetes for some reason, you just replace this plugin and implement another one. That's it. We started with moving the exclude-prefixes logic, and we'll continue with moving the registry logic, defining it per plugin.
Yeah, for those of you who are a little less familiar: one of the things network service mesh does, particularly within Kubernetes, is avoid colliding with the prefixes that are in use in the cluster, so that logic is being broken out into a plugin. And correct me if I'm wrong, Victoria: when you say plugin, you mean something in the same spirit as what's done with CRI or CSI in Kubernetes, where things register themselves with gRPC and the network service manager interacts with them over gRPC? Yeah. So not a plugin in the traditional sense, because Go doesn't usually work that way, but more in the Kubernetes sense. Hopefully this will make the network service manager a lot simpler and the entire system a lot more flexible and modular. And I know we've had several people speak up for flexible and modular lately, so I think that's probably good. So then the other one: Ardon, do you want to say a few words about the first steps you're taking toward SRv6 support? As you may have noticed in the document, I started preparing NSM for more than one remote mechanism, because right now we have only VXLAN. So I'm getting ready to prepare everything for SRv6, but I've just started. Yeah, it's early days, but it's super exciting. I know there are certain people on the call, who shall remain nameless, who have been very interested in seeing SRv6 for a long time now. You're talking about Ed. Yeah, I know you have no interest.
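To unpack the CRI/CSI-style registration mentioned above: the manager stops hard-coding where the sidecar lives, and instead the sidecar announces itself and the manager calls back into whatever registered. Stripped of the gRPC transport, the pattern looks roughly like the Go sketch below; every name here (ClusterPlugin, Registry, the prefixes) is illustrative, not the actual NSM plugin API.

```go
package main

import (
	"fmt"
	"sync"
)

// ClusterPlugin is what a cluster-specific sidecar would implement.
// In the real design this would be a gRPC service; here it's a plain
// Go interface to show the shape of the contract.
type ClusterPlugin interface {
	// ExcludedPrefixes returns CIDRs the manager must not allocate from,
	// e.g. the cluster's pod and service ranges.
	ExcludedPrefixes() []string
}

// Registry is the manager-side table plugins register into, replacing
// the old "sidecar listens on a well-known port" coupling.
type Registry struct {
	mu      sync.Mutex
	plugins map[string]ClusterPlugin
}

func NewRegistry() *Registry {
	return &Registry{plugins: map[string]ClusterPlugin{}}
}

// Register is what the sidecar calls when it starts up.
func (r *Registry) Register(name string, p ClusterPlugin) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.plugins[name] = p
}

// AllExcludedPrefixes gathers prefixes from every registered plugin.
func (r *Registry) AllExcludedPrefixes() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	var out []string
	for _, p := range r.plugins {
		out = append(out, p.ExcludedPrefixes()...)
	}
	return out
}

// kubernetesPlugin is a stand-in for the Kubernetes sidecar; the CIDRs
// are hypothetical cluster ranges.
type kubernetesPlugin struct{}

func (kubernetesPlugin) ExcludedPrefixes() []string {
	return []string{"10.96.0.0/12", "10.244.0.0/16"}
}

func main() {
	reg := NewRegistry()
	reg.Register("kubernetes", kubernetesPlugin{}) // the sidecar announces itself
	fmt.Println(reg.AllExcludedPrefixes())
}
```

The payoff described in the meeting falls out of this shape: swapping environments means registering a different ClusterPlugin implementation, and the manager itself never changes.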
The other thing that came up, around some minutiae of VNI handling for the interdomain work: Nikolai observed, quite correctly I think, that we probably want to figure out a way to move things like VNI selection down into the NSM data plane, which I think we're now calling the NSM forwarder, rather than having that done in the NSM manager, because it makes the system much more flexible and pluggable. We'll probably have to make some small changes to the data plane API to do that, but we'll try to keep those to a minimum. That is literally issue 1411, a conversation that started this morning, so if you're interested in how that works, please jump in on that issue. Will do. Cool. And then the kernel forwarding plane work. Do we have anyone to speak on that? Someone added it, so I don't have it off the top of my head. I will. So this is an effort we've been working on for some time now; Radoslav has been working on it, though he's not on the call. The idea is that we want to use kernel-only primitives to connect pods with kernel-only interfaces. This initial PR just adds that capability, without really testing it or doing much with it. There will be a follow-up that essentially makes it run alongside VPP and become part of the CI. The idea is that, for example, local pod-to-pod connections over kernel interfaces will be handled by it. And it will help us refine some things around supporting multiple forwarding planes, which will be helpful later when we want to test them. In the course of this work we also cleaned up some things, allowing another forwarding plane to be added more or less flawlessly. So when other folks come with other ideas, and we know there are some in the works, this is essentially something we believe will serve as an example of how to add things on top.
It shouldn't be underestimated how awesome this choice of work is, because the kernel forwarding plane is both useful in itself and an excellent foil for going through and shaking out the issues with supporting multiple data planes. And multiple data planes, particularly multiple simultaneous data planes, are going to be hugely important as we move to what's next. It's literally the simplest possible interesting thing you could have done that actually shakes out all these problems and gets them fixed, so it's a really good choice of thing to work on. Yeah, I think it's in an almost-final state; Andrei did some review today, someone else yesterday. If anyone else is interested, please go add your thoughts; I think we should be able to land it this week if everything goes fine. Cool. Do other folks have things in progress they want to raise here? As always, feel free to add your stuff to the agenda as we go, but I want to make sure we hear from other people who are working on interesting chunks of things. Oh, actually, I just remembered one that I should add: I've been puttering away at some SDK stuff that I should probably link in here. The thing that I'm most amused by is that I've done it in a way that takes tracing internal to the SDK, so you can trace through the different elements you're using. Let me get that linked in here. That ends up giving you cool stuff, where you can see the trace through the internal components as well, not just through the external gRPC calls, which at least makes me very happy. Maybe I just have an unnatural affinity for traces. Have you seen the way Kiali visualizes the flows in a mesh? The way who? Kiali, the tool you leverage to validate what Istio does in a mesh. I don't think so.
It would be wonderful if you could provide some pointers. Yes; that's actually what I posted on the Slack. Having that same visibility into an NSM mesh would be insane. Okay, that's super cool, because we've had a lot of people thinking various things about how to add visibility into network service mesh. We've got some metrics work that was done by Mathieu, and we've also got people who have mused a bit about using IOAM headers and that kind of stuff, but more things to learn from would be excellent. Yeah, we also have work going on around the SMI; at least for the first attempt, we are mostly looking at the metrics part of it. So that's kind of in the same direction of trying to get more information and present it in some format to the outside world. I think a lot of people here, if not everyone, understand that this is a very crucial point for the project to succeed; being able to see what's going on is one of the first things people look for in any mesh at all. Cool. Okay. There is no active development on using NIC hardware or specific SR-IOV features; it has been on the list of things we intend to do for a while now. Effectively, part of what it comes down to is getting it specced out and getting someone who's interested in working on it. Is that something you would be interested in working on? Yes, I've started looking at how I can, from the SDK, use specific NIC features and things like this. Okay, cool. That actually raises the priority quite a bit, since you're interested in working on it, of writing some stuff down. We should definitely chat offline, because I have some thoughts there and I'm sure you have some thoughts there, and it would be good to sync up. Also, putting together the spec doesn't mean that you have to be the one to implement it as well.
So there's value in even just coming up with a spec. But if you intend to implement it, fantastic; we'll take as much as you're willing to give. Are you talking about the gateway type of use case? Yes, of course, that's one of the main use cases. Yeah, we're doing something similar for the CNF Testbed, with Taylor and Michael, or at least we intend to. But it's not as formal as what you've proposed; it's more or less manual. That's not great for the time being, but maybe it will at least help pave the way for the development. I know there are various ideas here: for instance, it has been proposed that we could consume external hardware as a service, and then there is this spec about gateways, which kind of tries to mimic the approach Kubernetes has for ingress. One thing you might want to take a look at, Mathieu, for the gateway work, is the interdomain spec, because I tried to leave architectural space in there for what you were talking about doing with gateways, in the sense that you can have proxy network service managers and proxy network service registries that can link up with whatever you're doing for gateways in terms of data plane. Does that make sense? Okay, I'll have a look. Thank you. And if you have any questions, let me know; whether I actually gave you all the stuff you need for what you have in mind is a good question, but I at least tried. So, cool. Specs to review: we've got quite a few specs running around right now, in addition to the ones that are actually somewhat in progress, where input is welcome. One of the ones running loose: right now, the way a client connects to the network service manager involves connecting over a Unix file socket, and that Unix file socket has been injected via a device plugin.
So the idea here would be, once security lands, to explore using TCP to have the pod connect to its local network service manager. There's a spec looking at how we might go about doing this, and more eyeballs would definitely be better there. That would potentially simplify a bunch of things. We would still definitely want to use the device plugin to inject things for memif cross-connects, but we might not need it for the non-memif case. Yeah, and I suppose in that case we could move this logic to, for example, the VPP-based data plane only. So it would also be possible, with security, to have just one Unix socket, similar to Kubernetes, to connect to the NSM manager, and if the client needs memif, it would request a workspace from the data plane. I suppose we discussed this a bit some time ago, but we could think about it a bit more and probably write it up. Anything that makes the system simpler would be great; I mean, there is some trickiness involved, but anything that makes the system simpler would be great. So, all right, then we have the SMI. Do you want to say a couple of words about what's going on? Yeah, the spec as it stands is not up to date with the latest thinking, but I'm currently on the metrics part and the Prometheus integration. We had some discussion with Radoslav and Nikolai, sharing the approach, and what we agreed on, so that we'll have well-representative metrics, is to tag them with the client namespace, since it's unique, so that you can make queries in Prometheus and it will be well integrated with the SMI requirements. For the other part of this, we will need to engage with the SMI community; I see some movement there, and I'm still waiting for them to announce some weekly meetings, because for those specs we need to add a network service route, since they're working with HTTP only for the moment. But that's not the first focus; we are focusing on the metrics part first.
I'm going to update this, because I haven't updated the spec very much. Okay.

And the last of the things here, for those not aware of it: yeah, this is a spec that's been sort of bouncing around for a while. It's actually kind of interesting, and maybe a candidate for something to look at along with some of the plugin stuff we discussed earlier. Because, effectively, when you get down to it, what this is actually looking at is being able to plug in a different selection criterion among candidates other than round robin. Round robin is basically wonderful in that you can do it really easily, but there are lots of more sophisticated things people might want to do to select a network service endpoint. So this was looking at modularizing the selection process so that you could bring other considerations into it.

The idea behind it, and I know Bhutanash, he's a colleague of mine, is for now to rely on the metrics reported by NSM in order to create a cross connect with endpoints that are less loaded. That's one of the first ideas, but of course we can expand the concept. And I'm actually interested in this on two levels. The most interesting level for me is making this modular enough that people can easily experiment; the second level of interest for me is the kinds of cool things that would result from those experiments. It's all about wanting to make it much, much easier for folks. So I wanted to draw people's attention to this and get some more people involved in commenting on it, because it is a very cool idea. Okay. Cool.

Yeah, I don't know if we have time to share things here, but on the examples repository, we have more and more examples there, and I have recently added a number of improvements just to make life easier for the people using the examples.
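As an aside on the endpoint-selection spec discussed above, the "modularize the selection process" idea can be sketched as an interface with interchangeable strategies: the existing round robin, and a metrics-aware least-loaded selector. The type and field names here are illustrative, not the actual spec's API.

```go
package main

import "fmt"

// Endpoint is a minimal stand-in for a registered network service endpoint.
// Load represents a metric reported via NSM; the field is an assumption.
type Endpoint struct {
	Name string
	Load float64
}

// Selector is the pluggable piece: given the candidate endpoints for a
// network service, pick one. This is the modularization the spec proposes.
type Selector interface {
	Select(candidates []Endpoint) Endpoint
}

// RoundRobin cycles through the candidates in order (current behavior).
type RoundRobin struct{ next int }

func (r *RoundRobin) Select(c []Endpoint) Endpoint {
	e := c[r.next%len(c)]
	r.next++
	return e
}

// LeastLoaded picks the endpoint with the smallest reported load,
// in the spirit of cross connecting to less-loaded endpoints.
type LeastLoaded struct{}

func (LeastLoaded) Select(c []Endpoint) Endpoint {
	best := c[0]
	for _, e := range c[1:] {
		if e.Load < best.Load {
			best = e
		}
	}
	return best
}

func main() {
	eps := []Endpoint{{"nse-1", 0.9}, {"nse-2", 0.2}, {"nse-3", 0.5}}
	rr := &RoundRobin{}
	fmt.Println(rr.Select(eps).Name) // first pick in rotation: nse-1
	fmt.Println(LeastLoaded{}.Select(eps).Name)
}
```

With a seam like this, experimenting means writing one new type that satisfies the interface, which is the "easy to experiment" property the speaker is after.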
So you can list the examples with some simple descriptions. You can browse them; you can actually call for a description, which essentially shows the specific example's README, just to let you select whatever you want to do without actually running it or going and browsing the code.

And then I am working on this universal CNF, which is a thing we are trying to prepare for the CNF testbed. It's essentially, I don't know how best to describe it, a ConfigMap-driven VPP agent container. Does that sound meaningful? The idea is that you have a single container, and then you write different ConfigMaps and it behaves differently: it can implement a bridge, it can implement a router, it can implement whatever is needed there, or at least whatever the VPP agent can expose.

Something you might want to take a look at, since I do like the notion of ConfigMap-driven: go look at how CoreDNS does its configuration. I think it may map very well to what you're doing with the universal CNF. Okay, I don't know for sure that it will fit. But they also have a situation where their internals are very modular, and they have different pieces that configure them that you can bring to bear. Okay, I will look at it. Thank you. Cool.

And Nikolay? Yeah, does this mean that the CNF testbed will by default embed NSM? I think this is one of the use cases we are looking to enable there with NSM. Okay, but other use cases within the CNF testbed will also be able to use NSM? That's for Taylor to answer; I'm just trying to help them enable it. Could be great. Yeah, hi, this is Taylor. I think we'll probably have other options in the testbed, specifically because other things are being worked on, and anything that someone wants to contribute.
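To give a feel for the "single container, different ConfigMaps, different behavior" idea behind the universal CNF, here is a purely illustrative ConfigMap sketch. The key names and the config grammar are assumptions for illustration, not the actual implementation.

```yaml
# Hypothetical ConfigMap a universal CNF container could read at startup
# to decide which VPP agent behavior to expose. Same image either way;
# swap the ConfigMap and the container acts as a bridge, a router, etc.
apiVersion: v1
kind: ConfigMap
metadata:
  name: universal-cnf-config
data:
  config.yaml: |
    role: bridge          # or: router
    interfaces:
      - name: memif1
      - name: memif2
```

This is also where the CoreDNS comparison lands: CoreDNS keeps one binary whose behavior is assembled from a declarative config (the Corefile), which is the same shape as driving one VPP agent container from a ConfigMap.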
I do believe that most of the upcoming use cases will be connected to the service chains and driven with NSM, though. So I think it's going to be a core part, even if there are other test cases that may use other options. I believe the CNF testbed is, at least currently, our biggest chance to prove that NSM is a fit for the telco dynamic infrastructure use cases, where you can bring up your own slices and reconfigure things at runtime, things like that. So yeah, we should use it to the best of our capabilities.

The upcoming events at Open Networking Summit and KubeCon are all around test cases that use NSM and the CNF testbed, and the other event that was mentioned was the tutorial on using the CNF testbed. Ideally, during that tutorial, the example test case we bring up for people to run would also use NSM. So we're looking forward to it, especially for the more complex use cases. We've kind of been holding off on implementing those because they get pretty complex if we're doing it essentially manually, or you could say out of band, with a lot of custom scripts to do all the connections, which is how we've been doing it.

Okay, great. I guess that's it for today. It's been kind of an eventful meeting, and we're right at the top of the hour. So, awesome. We will see you all next week. Bye bye. Cheers.