Frederick, shall we get going? Yeah, I think so. So I'm sure that someone has already put the link in, but if you've not already added yourself to the meeting minutes here, please do. Great, and if we can get someone to share the agenda as well, that'd be good. And with that, I will go ahead and get started.

So welcome to the Network Service Mesh call. We have three calls that we generally do. We have the NSM doc call, which is currently on hiatus until Jeffrey gets back. We have the NSM use cases call, which was listed with its next meeting on August 13th; sorry, that one was actually today, the 13th, so the dates look wrong on this one. I'll ping the people running that to see what we can do with it. We have the CNCF Telecom User Group, which occurs every first and third Monday at 8 am. And there is a CNCF Networking Working Group, which occurs every two weeks on Tuesday at 9 am Pacific time.

So we have a few major events coming up. We have ONS Europe, which will be in Antwerp, where we have four accepted talks, the Telecom User Group meetup, and a CNCF testbed tutorial. We also have the Open Source Summit coming up in Lyon, with a talk accepted by Ivana and Radoslav. And we have KubeCon + CloudNativeCon. For those of you who were not here last week, we have a co-located NSM event known as NSMCon. We have listed the call for proposals, and a pre-registration tweet has been posted. We should also be having a few talks at KubeCon as well, so we'll post the agenda when we get it. For the NSMCon proposals, submissions close one month from now, on September 13th. So get all your friends to submit.

And with that, do we have Lucina on? Hello. That's awesome. Thank you so much for announcing some of those things. It's in line with what the Twitter account was doing this week. So let's see how we're doing. Last week we had 352 followers; this week we have 360. Last week we were following 1,647; this week, 1,670. And last week we had 342 tweets; this week we have 366. Awesome. And I sent out the reminder for today's call, the NSMCon registration, and the initial release announcement, shared the blog post from VMware Open Source about part two of network service mesh, announced the network service mesh intro at Open Source Summit in Europe, and shared again the CFP for NSMCon. So this week I'll continue announcing the ONS EU intro and the OSS EU talk. And once the KubeCon NSM events are posted, which I think is probably a few weeks out yet for KubeCon talks, I can definitely keep promoting NSMCon and the call for proposals. And whenever we're ready to do the 0.2.0 release, please let me know. I was really quick last time; it was on a Friday when I noticed the v0.1.0, so I put it together pretty quickly. For this next one, I'm happy to put together a draft to make sure I'm capturing all of the new functionality that you'd like to share with the world.

Cool. Yeah. So with that, that actually brings us to a short announcement, which is that the v0.1.0 branch was finally tagged. That should mark the 0.1.0 release for people to start testing. Of course, this particular release is, I want to say, between alpha and beta quality. So I'm not saying to run it in production, but you're very brave if you do. Any feedback we can get, of course, we appreciate. And all fixes are going to go into master, so 0.1.0 is not going to have any point releases like 0.1.0.1 or 0.1.0.2.
And so with that, we should move on to the stuff that is currently in progress. Ed, you're listed, so you have the floor. Yeah. So I apologize, I'm just getting back from PTO, so I haven't had a chance to go through and groom the outstanding PRs. But when I was last here, we'd walked through the stuff in progress. So I wanted to give the community a bit of an update on where we stand on the sorts of things we talked about then, and make sure that we capture any new stuff in progress. And the same is true for specs in review.

So I know when we last spoke, we talked about the DNS work, which was in progress and was being broken into smaller PRs to be merged. Denise, do you want to say a few words about where we stand on the DNS work? I think you're muted, Denise. Yeah, it seems so. Okay. I think that's mostly in at this point; correct me if I'm wrong, guys. There may be one PR still out. Yeah, I need to review the latest pull request. It should be complete and ready to be merged if all is fine from our side, from the reviewers. Cool.

And that's actually very interesting, because what it lets us do is this: if you have a network service, and part of what that network service does is also provide DNS, then the pod can receive that DNS service as well as whatever DNS it's normally getting from Kubernetes. And that actually works if your pod is consuming multiple network services as well. So that's super exciting from a usability point of view.

The other thing that was in progress was security. Ilya, how is that going? Oh, it's going fine. The third PR is ready; it should be reviewed. It's already been reviewed by Nikolai, I think. Okay. And that'll bring in sort of standard SPIFFE/SPIRE kinds of security, plus some really interesting things around provenance, so that you can make sure that not only do you trust the guy asking for the network service, but you trust the various intermediaries that have been collaborating on providing it. Cool.

So, Artem, how is inter-domain going? The first part of inter-domain has already been merged, and we are almost at the point where all of inter-domain will be done. Cool. What kinds of stuff is still outstanding? Almost all the functionality is in; what's left is mostly testing. Okay. Cool. I know you'd identified a little problem with how we're assigning VNIs that's being sorted out, but that's the kind of thing you normally discover when you expand functionality.

A quick question for folks on the call: how many of you understand what inter-domain means? Okay. Because it's incredibly cool. Effectively, what it means is that I can be running a pod in a cluster someplace like GKE, and it can consume a network service that is provided by a cluster in AKS or some other place, and vice versa. So it means that we can actually do network service domains that smear across multiple clusters and potentially across multiple different environments. It sort of finally frees us from a particular cluster in terms of networking. It allows us to provide network services quite generally. Cool.

Yeah, this is Brian. This is the first time I'm attending this in a long time. Hey. For that inter-domain, then, is there a way of understanding, like, the latency? I have no idea, I've got a lot of reading to do. No, that's a really good question. I think we've got some other people in the community who've been looking at that kind of problem. Like, Matthew, I think you've been looking at some of the metric stuff here.
Not so much about latency, but in other directions. Do you want to comment a bit? Yes, I took a look at it, but mainly to report the latency and have some kind of metric to rely on, especially for the scheduling of new pods or the scheduling of new network services. Yeah. So it's definitely stuff folks are looking at. I don't think what's there currently quite gets to where you want to go. But if you would be interested in looking at that, or have folks interested in looking at that, it's definitely well understood to be a problem we also need to solve. So thank you for speaking up. It's a good question.

Hi, Ed. May I ask you a question, please, about the inter-domain things? Oh, cool. So, yeah. Basically, I'm curious about, let's talk about, communication between pods across clusters. Are we going to build a tunnel between the clusters? How would you implement the physical connections? Yeah. So the way network service mesh in general does this is that we have a dynamic negotiation of tunnel types between a client and something providing it with a network service. And right now, the one that we have built-in support for is VXLAN. But we've got folks working on SRv6, and the architecture is designed for basically any tunnel type, to be agnostic as to the tunnel type. So effectively, the idea is that you could have any kind of tunnel type you wanted coming out of the negotiation, but the client doesn't have to understand tunnel types and the network service doesn't have to understand tunnel types. That's something that can be negotiated by the mesh itself.

Oh, yeah. So, let's say, every connection is going to be peer-to-peer and point-to-point? Well, yes, between a client and a network service endpoint. Obviously, you wouldn't want to connect a bunch of clients together that way. But if I had, say, a network service endpoint that was providing a network service, and I wanted to have a pod that wasn't running in the same cluster participate in that network service, you could do that. Effectively, think of it as sort of a hub-and-spoke approach to the problem rather than a point-to-point approach. But it's not a bridge approach to the problem, because between any given client and network service, there had better be a point-to-point connection. That way, you always know exactly who the client is that's talking to the network service.

Okay. So, just one last thing. For both the client and, I mean, the service provider, the NSE endpoint: will there be a single tunnel, even for crossing clusters? Because what I'm curious about is whether it's possible to build a single tunnel across the clusters, or whether you, say, build several segments and put them together to form one. Well, the fundamental thing is that when you get to the network service endpoint, it has to see individual point-to-point connections for each client, so that it knows which client is which and can do the proper behavior. But what happens in the intermediate stage can be very flexible. At a logical level, network service mesh thinks in terms of workload communications, not cluster-to-cluster communications. So if you want to do something like trunk a bunch of stuff along a single tunnel between clusters, you could mechanically make that happen.
But when it gets to the network service endpoints on the other end, those network service endpoints have to be able to distinguish the traffic from each client in a straightforward way. This is actually super powerful, because you almost never care, logically speaking, about cluster-to-cluster communication. What you care about is workload-to-workload communication. Typically, clusters have way more stuff running in them than you want, and so you introduce all kinds of insane attempts at IP-based policy between the IPs in the clusters, and it gets super messy. It turns out to greatly simplify things when you actually focus on what you really care about, which is workloads talking to workloads. (There's a small sketch of the negotiation idea after this discussion.) Cool.

Thanks for the explanation, and also thank you for taking the notes. Is that Taylor? Thank you. Yeah, I think that's maybe Lucina. Oh, Lucina. All right. Yeah, thank you. Yeah, that one's Taylor. Thanks, Taylor. Oh, thank you, Taylor. Yeah, good notes are a wonderful thing. Oh, sorry, I think it's Taylor throwing it in and Lucina typing. Anyway, we have an excellent team. Okay, no, this is all Taylor. Cool.

Yeah, the really good shorthand is to realize that in network service mesh, we think about everything in terms of workloads communicating with network services, not clusters communicating with things. So we're not welded to the cluster, necessarily. Cool. Anything else that folks wanted to talk about or ask about within inter-domain?

Do we have any good documentation for how people can play with the inter-domain feature? Does anyone know? Should we go look? Because it strikes me that it's going to be something people are going to want to play with. I think we need to build a demo around this so that people can try it. Yeah, no, I would tend to agree, because again, this is super exciting stuff. So anyone interested in poking at it and writing down the experience so we can do a demo, please speak up and go do so. PRs are super welcome on that front. It should be great fun. Awesome.

So we've also got increased pluggability. Do you want to comment a bit about this, Victoria? I know you've got some basic infrastructure in place and you're now slowly making the whole world more modular. Yeah, exactly. We've merged the plugin infrastructure to master, together with the connection plugin, the first plugin we have. And I'm continuing to work on other plugins, like the discovery plugin and the registry plugin. Okay. Modularity is a good idea. What I increasingly find is that when you've got a system that's modular, it just makes life much, much easier. I think the first one you guys did was to factor out how we figure out the exclude prefixes. For folks who aren't familiar, that's the mechanism we use to, for example, avoid colliding with the intra-cluster Kubernetes networking. In the first pass, we sort of built that into the network service manager directly. But as we get people interested in doing NSM in a broader array of environments, having greater modularity makes life enormously easier. So, cool.

All right. Artem, do you want to comment a little on how SRv6 is going? NSM is already organized to work with multiple mechanisms, so right now I'm working on configuring an SRv6 connection as a remote mechanism. Cool. I know that we have one or two people who are super interested in that. I don't know if the people in question are on the call today; I don't think so. But there are definitely people who are interested in that.
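To make the tunnel-type negotiation described above concrete, here is a minimal Go sketch of the idea: the client offers mechanisms in preference order, and the mesh picks one that the endpoint side also supports. All the type and function names here are assumptions made for illustration; this is not the actual NSM API.

```go
// Hypothetical sketch of tunnel-type (mechanism) negotiation.
// These types are illustrative only -- not the real NSM API.
package main

import "fmt"

// Mechanism describes one tunnel type a peer can support,
// e.g. VXLAN today, SRv6 as a second remote mechanism.
type Mechanism struct {
	Type       string            // "VXLAN", "SRV6", ...
	Parameters map[string]string // e.g. VNI, tunnel endpoint IPs
}

// Request carries the client's preferences in priority order.
// The client itself never has to understand the tunnel types.
type Request struct {
	NetworkService       string
	MechanismPreferences []Mechanism
}

// selectMechanism is roughly what the mesh does on the providing
// side: pick the first client preference it also supports.
func selectMechanism(req Request, supported map[string]bool) (Mechanism, error) {
	for _, m := range req.MechanismPreferences {
		if supported[m.Type] {
			return m, nil
		}
	}
	return Mechanism{}, fmt.Errorf("no common mechanism for %q", req.NetworkService)
}

func main() {
	req := Request{
		NetworkService: "secure-intranet-connectivity", // hypothetical service name
		MechanismPreferences: []Mechanism{
			{Type: "SRV6"},
			{Type: "VXLAN", Parameters: map[string]string{"vni": "101"}},
		},
	}
	// This providing side only supports VXLAN, so negotiation lands there.
	m, err := selectMechanism(req, map[string]bool{"VXLAN": true})
	if err != nil {
		panic(err)
	}
	fmt.Println("negotiated tunnel type:", m.Type) // negotiated tunnel type: VXLAN
}
```

The point of the design, as described in the call, is that adding something like SRv6 is just one more entry in the preference list; neither the client nor the network service has to change to take advantage of it.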
And it's a really good thing, because it's sort of our first example of a second remote mechanism. And as we all know, every time you do the second example of something, you shake out bugs. So, cool.

Do we have Radoslav here to talk about the kernel forwarding plane? Yeah. Hey, hi, Ed. Yeah. So I'm really happy that it got finalized a bit, at least the first implementation. It's already merged, so everyone who is willing to try it can follow the steps and share any feedback. For the time being, it doesn't yet support the functionality of adding routes and neighbors. That's the thing I'm thinking through next. Cool. No, this is super exciting. We've always designed and intended for network service mesh to support multiple pluggable forwarding planes for the cross-connects, but nothing is really real till you do the second one. And it's also good that you're immediately getting people reporting feature requests, like please support the routing and neighbor stuff. So that's also really, really good. Yeah. Cool.

And then the SDK evolution stuff. So I still need to go back and rebase this PR and get it finished up. This is something where I was making the SDK a little bit more modular, and also building into it the ability to trace at the sub-gRPC-call level. So imagine a composite where you have a list of small little pieces of work: connect the interface to the data plane, configure this thing about the data plane, et cetera. That's kind of what I'm doing with the SDK evolution. But with the tracing that I've done, you can actually see the traces internally as you go through each of those steps. So it becomes super easy to figure out not only what's going on inside of the network service endpoint, but also where you're leaking time in the call. And as I'm doing this, I'm catching things out; I already had a place where I went, okay, clearly there's a lot of contention there, I can literally see it in the trace. So it might be a good approach to extend even to how we build network service managers at some point, because that would make it even simpler for people to build network service managers for different environments. (A small illustrative sketch of the idea follows below.) Cool.

Is there anything else that folks have in progress that isn't listed here that we can highlight? I literally just cut and pasted this from two weeks ago when we met, so I want to make sure we're highlighting other people's work that's going on as well. Cool.

And then the second thing is that we have a mechanism in network service mesh to allow the community to talk to itself, which we refer to as specs. And this is not a mandatory thing; you don't have to go write a spec before you write code. But if you have a thing you think you want to do, and you'd like to have that conversation with the broader community before you write code, we have our spec board and you can go write a spec issue. Typically, people will write those up as Google Docs, because that's the easiest thing to collaborate on. And then when somebody goes to implement the thing, they'll actually write down a spec that gets committed to the repo, describing what was really done. And so those specs end up being a good way for the community to talk to itself, for people to have a sense of what folks are thinking and where it's all going, and to bring in differing ideas on things.
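As referenced above, here is a minimal, hypothetical Go sketch of the SDK-evolution idea: an endpoint composed of small steps, each one timed, so the trace shows both what happened and where time is leaking. None of these names come from the actual NSM SDK; they are assumptions for illustration.

```go
// Hypothetical sketch of a composed endpoint with per-step tracing.
package main

import (
	"fmt"
	"log"
	"time"
)

// Step is one small piece of endpoint work, e.g. "connect the
// interface to the data plane" or "configure the data plane".
type Step struct {
	Name string
	Run  func() error
}

// runTraced executes each step in order and logs how long it took,
// which makes contention or leaked time easy to spot in the trace.
func runTraced(steps []Step) error {
	for _, s := range steps {
		start := time.Now()
		err := s.Run()
		log.Printf("trace: %-30s took %s err=%v", s.Name, time.Since(start), err)
		if err != nil {
			return fmt.Errorf("step %q failed: %w", s.Name, err)
		}
	}
	return nil
}

func main() {
	steps := []Step{
		{"connect data plane interface", func() error { time.Sleep(5 * time.Millisecond); return nil }},
		// The slow step below stands out immediately in the trace output.
		{"configure data plane", func() error { time.Sleep(40 * time.Millisecond); return nil }},
		{"advertise endpoint", func() error { return nil }},
	}
	if err := runTraced(steps); err != nil {
		log.Fatal(err)
	}
}
```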
And then I listed out a few, possibly not all, of the specs that I think are actively out there. Do you want to go back to the list in the meeting minutes? Yeah. And a quick note while we're doing that: something that we can also do for people is, if you have something you want to build that's not going to be part of the NSM repo, but is rather owned by your organization, and you want comments and are comfortable with public comments on it, post it onto the spec board here and we will help you as well. Yeah, totally. Obviously, it's for conversation more than anything, so it won't get committed into our repo or anything. It'll just be a nexus that you can use to get feedback.

So, a few that are currently hanging out there that would probably benefit from more eyeballs. We've got the getting-rid-of-device-plugin spec that is currently being discussed. We currently abuse the device plugin a little bit, and once security lands, we hope to be able to use it a little bit less. So there's a spec here about switching more to TCP for the communication between local network service endpoints and network service clients and their per-node network service manager. That's been laid out here. Anyone want to comment on this at all? Do any of the folks behind it want to talk about it a little before we move on to the next one? Cool.

So next up: I think Ivana is working on trying to figure out how we would interact with SMI. Do you want to say a few things about that, Ivana? Yes, I'm finally involved. There was a... Your audio is a little muddled, Ivana. Or maybe that's just me. Can you hear me? The volume is really low. Can you hear me now? You're a little bit better now, I guess. Go ahead. Yeah, so there was a blocking issue there, which was fixed this last week. The cross-connect monitor wasn't receiving update events, but it's working well now. I'm currently working on the observability, which is the first part; it's the first thing to integrate with the SMI metrics. And the first thing I was considering is how to form queries in a well-described way, so that you can search by client and by type of communication. Currently a query is formed by the namespace of the client, whether it is source or destination, and the type of packets, that is, RX and TX. I'm forming those queries now, and I've started preparing a pull request for the Prometheus integration. So that's the first thing; I'm concentrating on the metrics now. Okay, sounds good. Cool.

All right, so then, other specs. We have a spec that's been out there, and I don't think we have the gentleman behind it on the call today, looking at more sophisticated selection of candidates. Right now, network service mesh has a fairly sophisticated label-based way of selecting the set of candidate network service endpoints it can connect a client to. But once you've selected a set of candidates, we just sort of round-robin among them. So this spec is looking at how we could be more sophisticated and make smarter decisions. My guess is this is likely going to start interacting at some point with some of the stuff that's going on around modularizing the network service manager and breaking things out into plugins. But it's also cool.
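For illustration, here is a small Go sketch of the selection behavior this spec is about: label matching produces a candidate set, and today's choice among candidates is, as described above, a plain round-robin. The types here are hypothetical, not the network service manager's real data structures.

```go
// Hypothetical sketch of round-robin selection among candidate
// network service endpoints.
package main

import (
	"fmt"
	"sync/atomic"
)

// Endpoint is a candidate network service endpoint after label matching.
type Endpoint struct {
	Name   string
	Labels map[string]string
}

// RoundRobin hands out candidates in rotation, which is roughly the
// behavior the spec proposes to make smarter.
type RoundRobin struct {
	next uint64
}

func (rr *RoundRobin) Pick(candidates []Endpoint) Endpoint {
	n := atomic.AddUint64(&rr.next, 1)
	return candidates[(n-1)%uint64(len(candidates))]
}

func main() {
	candidates := []Endpoint{
		{Name: "nse-a", Labels: map[string]string{"app": "vpn"}},
		{Name: "nse-b", Labels: map[string]string{"app": "vpn"}},
	}
	rr := &RoundRobin{}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.Pick(candidates).Name) // nse-a, nse-b, nse-a, nse-b
	}
	// A smarter selector could instead weigh load, latency, or
	// locality -- the kind of decision the spec is exploring.
}
```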
And then, Matthew, what I don't have here on the list, and probably should, is the stuff you're doing on gateway work. Do you want to say a few words about that? Yes, I'm still having a... I'm going to go ahead and add the links for it there if I don't get to it first. Still looking at what we can do. And of course, we can use some device plugin stuff, especially the one that is already in the NSM tree. I'm looking at it, but there are some quite simple alternatives that could be interesting, especially combining Multus with NSM. That's a simple workaround, but it's not in the scope of NSM for now. So what I would like to see in NSM is a way for an endpoint to say: I want to be an ingress for the mesh, especially for VPN gateways or for ingress gateways. Okay. So sort of like a forwarding plane that gets you in or out of the cluster? Yes. Because for now, there is no way to say how to consume traffic from the external world, or how to send traffic out of the mesh. That is the work I'm working on. Well, that's actually super useful stuff, and there's been a lively conversation around some of this. So I would encourage folks to get involved and participate, because there's definitely interesting stuff there. Cool.

Anything else that I'm missing from the specs board that folks want to bring up and make sure that we discuss? Sorry, Ed, I had a problem with the microphone. Can I say a few words related to DNS? Yes, please. For the DNS spec, I've provided a second PR, and it's ready for review, as it passes all the tests. In this PR, I've provided a solution for the case when a sidecar performs a connection. Also, in this part, I've provided tests that NSM's DNS coordination does not break the default Kubernetes DNS. So please take a look. After that, I plan to provide the final part of DNS, for the case when an NSM init container performs a connection. That's it. Well, that sounds good. I know you've been breaking this up into small reviewable pieces, which is much appreciated. So this is all very good news. Thank you. So I think that's the end of the stuff that I have, so I'll yield back the floor.

Cool. So I have one more thing that I'm going to add onto the specs board soon. The Makefile machinery has been quite useful to the NSM project, and I think it'll be useful to others outside of the NSM project as well. So what I'm going to propose on the specs board is that we split the Makefile machinery. Right now, everything is in a .mk directory, with the exception of the main Makefile itself. So what I'm going to propose is that we take that .mk, split it into things that are generic and things that are NSM-specific, and parameterize a few of the things that are NSM-specific. And then what people will be able to do is basically copy that git repository. So we'll split the .mk out into its own git repository, or rather one you can copy, that you can then use to import into other projects. So if you have a set of microservices you want to test, and you want to use Make to drive it, you could say make kubernetes-start, make kubernetes-<your-app>-deploy, and so on, and have everything work the same way it works within the NSM system itself (a rough sketch follows below). And I'll write up some documentation on what my ideas are so that people can get a sense of that as well. I don't have anything else on my side with that. Cool.
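As referenced above, here is a rough sketch of what the proposed split might look like. All file and target names here are assumptions for illustration; this is not the actual contents of the NSM Makefile machinery.

```make
# Hypothetical top-level Makefile after the proposed split:
# generic machinery lives under .mk/, and the project-specific
# bits are just variables an importing project overrides.

PROJECT ?= nsm          # project-specific parameter (assumed name)
CLUSTER ?= kind         # which cluster machinery to drive (assumed)

# Generic machinery would be imported from the split-out repository:
#   include .mk/kubernetes.mk
#   include .mk/docker.mk

# The kind of generic rules such an include could provide, giving any
# project "make kubernetes-start" and "make kubernetes-<app>-deploy":
kubernetes-start:
	@echo "bringing up a $(CLUSTER) cluster for $(PROJECT)"

kubernetes-%-deploy:
	kubectl apply -f deployments/$*.yaml
```

Under this sketch, another project would copy the repository, set PROJECT and its image list, and get the same make kubernetes-start / make kubernetes-<app>-deploy workflow that NSM uses internally.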
No, I'm super happy that it's turned out the Makefile stuff is sufficiently useful that we think other people might benefit from it. I know it was sort of born out of a frustration, with the goal of making everything simple, and it's a good indication that maybe we succeeded. Yeah, and I think one great part of it as well is that it also has decent machinery around getting things to work in other clouds. And it also sets things up so that, in the long run, it helps make things modular. What I would love to eventually see is this: the people from the cross-cloud organization have done a tremendous amount of work with multi-cloud scenarios as well, and this helps lay the groundwork so that we can potentially integrate with that work when we're both ready to do so. That way, we're not maintaining two separate things. So I see it as a step toward making sure that things don't get too messy and turn into a difficult task. So there's also a slight added bonus in that space. Cool.

Awesome. Well, that is the end of our agenda. Is there anything else anyone would like to bring up before we close the meeting? Okay, with that, we will see you all at the same time next week. Thank you everyone for showing up, and you all have a great day. Thank you. Cheers.