So let's go ahead and start off with agenda bashing while people are writing themselves in as participants, which is much appreciated. The basic agenda today: always starting with agenda bashing, then we've got a review of some action items that folks said they were planning on working on from last week — just want to see where they stand. Then we've got a review of development activity with Frederick and Kyle, plus whoever else has been working on things. I think we got some interesting stuff that John pushed, but he didn't want to talk about it until next week; he wanted to give people a chance to look through it. We've got a review of the use case mapping with Prem and Fabian and John. We still have an outstanding question around meeting time planning. And then in the conceptual review, we had a bunch of questions that were added to the agenda. Were these you, Mike? Yes, I added those conceptual questions. Thank you, that's very helpful, actually. I did want to make sure credit went where credit was due. And in addition to the various other things that are there in terms of collateral that we can look at, however folks find useful: I had a conversation already with Mike where we talked about the VPN gateway case, so I added some collateral for that. And then I attempted to answer some of the questions that you had posed, Mike, so we could take a look at that if folks want to as well. So, anything else that folks feel needs to be on the agenda? Alrighty then, let's go ahead and dive in.

So first, do please add yourselves to the attendees. I know we've got people who are currently arriving, and I'll stick the link to the agenda in the chat again for the new folks who are just getting here. It makes it easy to keep track of who's around.

So, action item review. On coding activity: I think you had hoped, Kyle, that you would have some CRD stuff out as a PR. Yes, happy news there. Yeah, definitely. I pushed out multiple versions of PR54, which is the CRD patch. Cool. So I'd love to get some feedback on that at this point. The problem is — I love vendoring in Go, but it makes code reviews interesting, because when you pull in new dependencies the patch looks gigantic. I also find that challenging personally. Yeah, exactly, it is what it is. So what I did, I tried to at least put some comments in, maybe to aid in review for people who want to take a look at that patch. And I'll definitely be available today — Frederick, I don't know if you want to sync up maybe later today, if you have some time, and I'm happy to do that if you want as well. Yeah, that'll work. Cool. Can I presume that'll be on the IRC channel then? Definitely, yep. On #networkservicemesh. Cool. A lot of fun stuff happens on #networkservicemesh; I recommend it highly. It does.

So, cool, that's actually excellent news. We've got that out and in the process of review. Do you want to talk a little bit about what the CRD pull request is doing? I know some people are familiar with CRDs and how we're looking to use them here, and some may be less familiar. Right. So we've decided that — well, at least that's what this patch proposes, and it would be great to get feedback — but we decided we're going to implement both network services and network service endpoints as custom resource definitions.
And the beauty of that is we get to utilize kubectl and the Kubernetes database and everything on the back end. So what the patch does is it allows us to take our protobuf file, which we already have, and that generates a bunch of Go code. Then we use the Kubernetes code generator: we just create a types.go file with our CRD definitions, which utilizes the generated Go file from the protobuf file as, essentially, the schema for us. So it's pretty slick. The rest of the code is mostly generated. Then I created a new plugin that essentially is able to run the back-end informer for this as well. Cool. No, that's awesome. Thank you, I appreciate it. It's good work.

Cool. So Frederick, Kyle had some hope that you would poke at in-cluster auth this week. I don't recall you being here to commit yourself to it; I'm just curious if anything happened there. I'm currently working on that at this point. I got sidetracked with a couple of other tasks from my day-to-day work, so I didn't get to dive in as deeply as I wanted. But what I do have is a test cluster already set up, and I'm going to create a couple of jobs and a couple of sample applications designed specifically to test the boundaries of the in-cluster authentication. Specifically, I'm expecting there to be at least two, I guess you would say, roles. One of them would just be a standard role, which I assume would have limited access to changes in the API, and then I'm expecting that I need to work out how it is possible to gain privileged access in order to make changes. So we'll need to work out how many privileges we need, and what's the proper way to expose that, if there is such a way. If there isn't, then we'll need to create credentials out of band and inject them into Kubernetes, and there are a number of ways we could do that. So we have a path to make it happen regardless. But the best-case scenario is cluster authentication with something in the pod spec saying what roles we want to attach to it, and then we should be good from there. So I'll have more on this next week. Cool. All right, awesome. I've already stuck something in the action planning for the coming week out of development there.

So, cool. Use cases. Prem, you were going to update the B2B VPN sequence diagram based on feedback. Right. So I did update it. I've updated it based on the feedback, but I'm also working on looking at the distributed VPN — what we discussed face to face — trying to see what the issues would be if we fit the BGP VPN into the distributed bridge concept. That's what I'm currently working on, and I've also captured the pros and cons between the two approaches. So that's the status of the update. Yep, that's cool. So we'll probably talk a little bit more about that when we get to the use case review section. And I know you and I had the opportunity, because I was speaking in San Jose, to sit down and walk through the distributed bridge deck. Right, distributed CNFs. Right. You found that helpful; it may be something we decide to review in the conceptual review section today as well. Sure. Cool. And then, John, did you get sequence diagrams added to your use cases? I did, with some help from Prem. I've got a couple that are near that point; they need quite a lot of input, but there's enough there to chat about, I think. Okay, cool. So we'll get to that in the use case talk section. Perfect.
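To make Kyle's CRD description above concrete for folks less familiar with the pattern: here is a minimal sketch of what such a types.go can look like, assuming hypothetical type and field names (the real PR54 may differ).

```go
// types.go -- a minimal sketch of a CRD type definition in the style
// Kyle describes. Package path, type names, and fields are hypothetical;
// the actual PR54 may differ.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkService wraps the schema in Kubernetes object machinery. In the
// approach described above, the Spec would reuse the Go struct generated
// from the existing protobuf file, keeping the proto the single source of
// truth; a placeholder struct stands in for it here.
type NetworkService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec NetworkServiceSpec `json:"spec"`
}

// NetworkServiceSpec is a stand-in for the protobuf-generated type.
type NetworkServiceSpec struct {
	Payload string `json:"payload"` // e.g. "l2" or "ip"
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkServiceList is required by the Kubernetes code generator.
type NetworkServiceList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`

	Items []NetworkService `json:"items"`
}
```

Running the Kubernetes code generator (k8s.io/code-generator) over a package like this produces the deepcopy functions, clientset, listers, and informers that a back-end informer plugin like the one Kyle mentions would consume.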
And then Chris, I think you already dropped me a note saying you hadn't quite gotten to the day-in-the-life-of-a-packet. Yeah. So as I mentioned, I would like to get a better idea of the use cases first, so I understand exactly where packets are flowing through. And, as was also mentioned, to get the definitions down straight, because I think I heard last week overlay, underlay, different encaps, and that kind of thing. Yeah, I can see how that would be confusing. So hopefully we'll pick some of that up in the conceptual review, or I'd be happy to make some time next week to sit down with you and talk you through whatever would be helpful. Okay, absolutely. Yep. I had a good conversation like that with Mike this week that at least I found highly productive. Cool, very good. I'll get on your calendar. Thanks. Awesome.

Okay, so, review of developer activity. Other than the stuff we've already mentioned on the CRD pull request and in-cluster auth, is there anything else you guys would like to raise? Well, this is something you and I were talking about: I think another area is just testing in general. Frederick, I think you kind of brought this up with what you were discussing earlier, but I think that's the next area I've started to look at post-CRD — doing a bit more testing on some of the existing stuff we have. Yeah, I totally agree. And it's something I've been poking at a little bit too, as I'm looking at doing a proper device plugin. I'm discovering some interesting and useful patterns. It turns out that if you work in this plugin framework, it becomes easy to write a test plugin that just exercises your plugin and then checks to make sure that the plugin is doing the right thing as part of its activity. So hopefully I'll have a pattern that I can show next week.

So one thing we're probably going to run into in the future is when we want to test functionality that requires a Kubernetes cluster to be available. I think we're going to run into limitations with Travis and Circle CI and so on, so we need to work out a long-term test strategy for the integration tests as well. That's definitely true. And actually, Ed hooked me up with some of the CI/CD testing folks from CNCF. I have an action to follow up with them, because they are actually looking into this for things beyond NSM as well. So Frederick, I can definitely pull you into that as well, if you'd like me to. Yeah, that'd be a good thing to do, because for a start, the Kubernetes community has to do this testing themselves at this point. And there's also some work that I've been doing, not as part of CNCF but as part of the Linux Foundation — so, I guess, a step higher — which may be able to help a little bit with some of this stuff as well. So I think it'd be good for me to get into that, because it'll help me outside of network service mesh too, in a couple of other projects that I'm working on. Perfect, I'll loop you into that as well. Cool. This is all goodness. Anyone else have anything around developer activities that they're thinking of for next week, or other sorts of things they want to focus on? Cool. Then I'm inclined to turn this over to Prem.
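One plausible shape of the test-plugin pattern Ed describes — a plugin whose only job is to exercise another plugin through the shared framework and check the result. The Plugin interface here is a hypothetical stand-in, not the actual NSM plugin API.

```go
// plugin_test.go -- a sketch of the "test plugin" pattern: drive the
// plugin under test exactly the way the framework would, then assert
// that it did the right thing. Interfaces here are illustrative only.
package plugins

import "testing"

// Plugin is a hypothetical minimal plugin contract.
type Plugin interface {
	Name() string
	Handle(request string) (response string, err error)
}

// echoPlugin is the plugin under test.
type echoPlugin struct{}

func (echoPlugin) Name() string                      { return "echo" }
func (echoPlugin) Handle(req string) (string, error) { return req, nil }

// TestEchoPlugin plays the role of the "test plugin": it exercises the
// plugin under test and checks the outcome as part of its activity.
func TestEchoPlugin(t *testing.T) {
	var p Plugin = echoPlugin{}
	got, err := p.Handle("ping")
	if err != nil {
		t.Fatalf("Handle returned error: %v", err)
	}
	if got != "ping" {
		t.Errorf("Handle = %q, want %q", got, "ping")
	}
}
```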
Do you want to drive, Prem, or shall I continue driving the use case talk? I'm fine either way. Yes, you can probably drive it. Okay, cool. So, use case talk. Just let me know where you'd like to go. Yeah. So, on the communication workflow, I'm trying to capture the use cases that have been discussed. I've added the distributed bridge use case also, because I intend to use it in the L3 VPN use case. So you can scroll down — we can scroll down from the top. Okay. So we discussed this last week; further down, yeah, next page. Yeah, sorry, next page. Okay, scroll, please. Are you looking for the sequence diagram? Yeah.

So, Ed had mentioned earlier that there can be two approaches. One: you can set things up on demand, and that is what this use case is going to cover. The other scenario would essentially be a full mesh between the compute nodes. In that case, what happens is there's a full VXLAN mesh that's created between the various nodes, and then you would have the bridge pod exposing those channels; that's going to be the second model. But last week, what we discussed was — I was a bit confused on whether it's going to be a VXLAN channel or an L2 channel. It is the VXLAN tunnel that would be set up by network service mesh a priori. And then once that happens, the pod would essentially be exposing the L2 channel, and then the network service mesh manager would publish these L2 endpoints onto the API server. And then whenever another pod wants to communicate with the pod on node one, it would essentially use these L2 channels to talk to it. I also see a bit of a parallel with the VPN gateway; we can probably discuss more when we get into the VPN gateway, right?

So again, just for people who have joined and are new to this call: this particular use case is essentially to create network virtualization for a typical data center. The idea is you would have the MPLS/BGP VPN terminating on your PE slash DC gateway. And then what happens is you would have pods of different tenants being hosted on different nodes. So for the internal traffic, as well as the BUM traffic, there will be a VXLAN mesh between all the nodes that is set up. And then there will be another, MPLS-over-GRE, tunnel that would essentially be set up with the DC gateway, which would carry the LSPs all the way up to the nodes. This channel is exclusively meant for the pods to talk with the external world, as well as to talk to sites which are remote. That's the use case. And then as part of the use case, the first sequence diagram is all about what the channel looks like. If you have gone through the intro document as well as the other documents, it's clearly mentioned that the network service mesh would essentially set up the VXLAN — it should be tunnels; sorry, it should not be channels.
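The "publish these L2 endpoints onto the API server" step Prem describes is where the CRDs from PR54 would come in. A hedged sketch of what that publish could look like, with hypothetical type and field names; real code would go through the generated clientset's Create call.

```go
// A sketch of the NSM manager publishing a NetworkServiceEndpoint so
// that pods on other nodes can discover it. Names are hypothetical.
package nsm

import (
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NetworkServiceEndpoint mirrors the hypothetical CRD type.
type NetworkServiceEndpoint struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	NetworkService    string `json:"networkService"` // service this endpoint implements
	NodeName          string `json:"nodeName"`       // where the providing pod runs
	Channel           string `json:"channel"`        // e.g. "l2"
}

// PublishEndpoint registers an L2 endpoint for a bridge domain. The
// create callback stands in for the generated clientset's Create.
func PublishEndpoint(create func(*NetworkServiceEndpoint) error, node string) {
	nse := &NetworkServiceEndpoint{
		ObjectMeta:     metav1.ObjectMeta{Name: "bd0-" + node},
		NetworkService: "bridge-domain-0",
		NodeName:       node,
		Channel:        "l2",
	}
	if err := create(nse); err != nil {
		log.Fatalf("failed to publish endpoint: %v", err)
	}
}
```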
So the VXLAN channel would be a tunnel. And then the pods can export L2, or anything else depending upon their capability, and this can be transported over the existing VXLAN tunnel. So this would be essentially within the data center. And then the next piece is the connection from the nodes to the DC gateway. Again, the assumption is that GRE tunnels would be set up between the nodes and the DC gateway, and then it's essentially the MPLS channel, or the LSPs, that run all the way from the DC gateway to these nodes. So this is more like a point-to-point scenario. But one of the scenarios can be the distributed bridge concept, which Ed had brought up. I haven't populated it yet because I see some gaps in it; I'm still working on it, so I'll populate it later, by next week, and we can probably also discuss it more during the scenario discussion. So that's the update with respect to the use case document. Yeah. Cool, excellent. Thank you, I appreciate that update, and it's good that you went through and updated the sequence diagrams. And if folks are more interested in the distributed CNF, distributed bridge case, we can talk about that in the conceptual review section later on.

Now then, John, I think you also had updated some sequence diagrams. Do you want to drive, or would you like me to? Nope, I trust your driving, Ed, implicitly. You're doing a great job. So I copied Prem's diagrams and got some input from him, and I did two sequence diagrams. The first one, which you're highlighting here, is bringing up a new resource. And so the first thing is the pod says to network service mesh: I require a new channel. It talks to the API server, which talks to the device plugin daemon set, asks for the resource, and triggers the instantiation of a new network namespace in the pod from the device plugin, which then triggers the instantiation of a new security CNF. Oh, it could be any CNF; this is just for this case. And then we inject the CNF into the container of the pod, which is then running. So you can imagine this being a security VNF. It could be a VXLAN gateway or a vTAP or anything else we want to put in there. It gives us the ability to stick network resources into pods on demand. Does this make sense? I mean, I kind of went through this and I'm not sure I've got it right, but — it makes a certain amount of sense, because effectively what you're saying is that you have a use case here — and you were sort of clear about the use case — where you would like to be able to inject a new network namespace into the pod, and inject a security CNF container into the pod. And that's an interesting use case, and there's a lot of interesting discussion you can have about who should be doing what, where, and when in that process. Yes. And I think you were pretty clear about that; this is certainly one way to look at how to do it. And this is interesting: do we have other folks on the call who are interested in this inject-a-security-CNF-container-into-the-pod use case?
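For reference, the mechanism in John's first diagram hangs off the Kubernetes device plugin API. A minimal sketch of its Allocate hook — the point where a per-pod resource could be handed to the container. Exposing a netns file, as below, is just one plausible injection mechanism, not necessarily what the diagram specifies.

```go
// A sketch of the Allocate hook of a Kubernetes device plugin: the
// kubelet asks the plugin for a resource, and the plugin hands back
// what the container needs (here, hypothetically, a netns handle).
package deviceplugin

import (
	"context"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

type nsmDevicePlugin struct{}

// Allocate is called by the kubelet when a pod that requested our
// resource is scheduled onto this node.
func (p *nsmDevicePlugin) Allocate(
	ctx context.Context,
	req *pluginapi.AllocateRequest,
) (*pluginapi.AllocateResponse, error) {
	resp := &pluginapi.AllocateResponse{}
	for range req.ContainerRequests {
		resp.ContainerResponses = append(resp.ContainerResponses,
			&pluginapi.ContainerAllocateResponse{
				Devices: []*pluginapi.DeviceSpec{{
					// Hypothetical: expose a network namespace handle.
					HostPath:      "/var/run/netns/nsm0",
					ContainerPath: "/var/run/netns/nsm0",
					Permissions:   "rw",
				}},
			})
	}
	return resp, nil
}
```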
Because I would love to get commentary from people who have a similar use case. I seem to recall that that is an approach IBM has taken with other projects in the past, and I believe it's an approach that should work, so it's definitely one that we should take a look at. Yeah, I posted the code yesterday that a colleague wrote up; there are a bunch of links to a couple of repos that have a working example. It doesn't use network service mesh, so I thought I'd need to work with Kyle and Fred on how to do this. Yeah, so that's awesome. I think that's awesome, and I appreciate you putting the code up — that helps everybody — and your interest in working with Kyle on this. The one thing — and I know you're aware of this because we've known each other for a while — is that there is a trade-off to having a CNF per pod versus, say, a CNF on a given node. And there are definitely times when that trade-off is worth it. So I think both use cases are going to be of interest. Yeah, I've talked to a bunch of customers and people, and the attraction, the positive feedback, is mainly that it makes the CNFs atomic with the pod, which is a standard Kubernetes design pattern: I don't have to worry about additional resources when I bring up a pod; the resources are there with the pod. Oh yeah, no, I can absolutely see the cases where that's an attractive way to do deployment. I think the negative side is it does make the pod a little more heavyweight. It absolutely does. And depending on how much work you're looking to do — you're well aware, and I think a lot of people on the call are well aware, of the trade-offs when you're really doing bit-banging data plane work: distributing all of that to a bunch of different CNFs versus putting it into a CNF that can kidnap some number of cores. But the truth of the matter is I absolutely see the use of this; I just think we need to support both.

And the second sequence diagram there is really very simplistic, but it's thinking about: what if I want an L3 management network on top of my existing L3 Kubernetes network? I know there's a whole bunch of work being done in Multus and other things, and whether Multus is the right way of doing it or whether network service mesh is the right way of doing it, I think, is another interesting discussion, because there are so many use cases for having this — I won't use the word overlay, Chris — this additional parallel network that's used to manage either network resources or other resources. The security guys and the management guys like having this DevOps-slash-SecOps network that's separate from the traffic network. Yeah. And I think I've also got something that I put together on VPN gateways that moves in this direction as well, because my suspicion is that often what is on the other side of that management network is some kind of a VPN thing that you're being gatewayed to. And so — and this is a really central point — it is almost never the case that what you actually want is a network. It is almost always the case that what you want is a network service.
You know, for example, in the case of a management interface, you don't really want it plugged into some plain L2 subnet, because then you've got to go manage that and figure out how to get what you really want in some out-of-band way. What you really want is for it to connect to, for example, your VPN gateway service, right, which does all kinds of nice things for you, including backhauling you to various other places. So, this is good. This is good. I appreciate it. Thank you. Once again, it's fairly simplistic. I actually put it together early this morning, just following Prem's L2 overlay; I think it works the same way. I think we just need to think about how we manage these multiple different resources, or multiple different meshes, if that's the right word. Well, I think part of what's helpful here is — I've had some conversations with people, and we all think most comfortably at different levels of abstraction versus concreteness. At a very abstract level, it's exactly the same pattern everywhere, right? So concreteness is very helpful: running through examples and following the same pattern as it applies in different environments. So this is very helpful, thank you. And if people can look at it and give comments, I'm more than happy to expand, update, change, modify, et cetera, to help. Thank you. Awesome.

So, anything else on use cases at this particular moment, on the use case document? Are there use cases that folks want to stand up and raise their hand to add? Okay, cool. Then, getting back to the agenda, I think the next item we had up was meeting time planning. There had been a point raised that having a meeting on Friday is problematic for certain parts of the world, because Friday is part of the weekend in certain portions of the world. And I think where we had left it — correct me if I'm wrong, because I'm a little vague here — was that Prem was going to send out another doodle, and that Mike was going to find people for whom this was concretely a problem to speak up. Did either happen? Yes. So I'm creating a form — sorry, I couldn't send the link to the group yet. We're all volunteers here, so I appreciate all the work. Yeah. So the form would essentially ask for the time zone as well as the time slot that would be suitable for everyone. I'll probably send it today; it may just be a Google form. Yeah, okay, that's cool. And Mike, you wanted to find folks who want to participate and for whom this is concretely a problem. I asked somebody to look at the existing doodle poll, and I see this person did not add anything to it. I will continue to try to raise awareness of this work and see what the interest is. Hi, Chris — we should talk some time. And that's all I've got right now. All right, thank you. I appreciate it. Awesome. So Ed, you being the captain of the ship, I want a time that will suit you, so I'll probably put that in the Google form: the times that suit you, the times you would be available. That's a generous characterization of the fact that I want to find a meeting time. But yeah, it's challenging, because I do know we have participants from Europe already, as well as North America, and that means basically mornings in North America. And most of those have been chewed up already by the many other collaborations. So it's a tricky thing, always, to find a time that suits everyone.
Right. One of the things I did want to raise — I know, George, that you have a bunch of folks in China who may want to participate. I want to make sure that if you've got concrete people there who want to participate, we roll them into this consideration as well. Yeah, I'm still working on that. That's good. One other alternative: if it turns out that the time zone thing is too difficult, then what we can do is alternate meeting times. That's exactly what I was thinking. Yeah, I was about to say the same. I have definitely seen that work. As a purely personal matter, it's a miracle I make it to any regular weekly meetings, so alternating meetings confuses me, but I can get over it. Your calendar being completely full with just this meeting. Okay, cool.

Awesome. So, action planning for the coming week. We already had a couple of things called out for development work. Is there anything else that folks want to add for developer coding activity? I don't have anything other than what we discussed before. So hopefully we can try to get the CRD patch in next week. Cool. On a purely personal note, I'd like people to take a look at the thing I've put out — maybe especially Frederick and Kyle — and see if there's any way we can integrate or overlap. So I'm just going to pull it in here. If you could actually put a link to that code there, that would be massively helpful, because otherwise there's the "oh, wait, I remember someone said I should review this code, where do I find the link" thing that people do. Yes, I understand. It actually gets worse over time, because we'll be asked to do ten different things, and remembering any one thing... Yeah. There's this lovely concept in psychology, for humans, called the hrair factor, which I'm very fond of. It's the number of things you can perceive the count of without actually counting. So if I threw three pennies down on a desk, most people can perceive that it's three. If I throw 13 pennies down on the desk, most people can't perceive that it's 13 without counting them. And I think most people can't manage more than their hrair factor of stuff going on at once. Cool. Awesome.

So, use case mapping. Prem? John? So yeah, I'll probably try to put in the distributed piece wherever it's applicable, and I'll probably add a separate section for gaps or to-be-decided items. Cool. Anyone else? If I get feedback, I'll update things. Okay, thank you. Cool. Awesome. All right, so anything else on action planning for the coming week? Awesome.

So, for conceptual review, we've got a whole laundry list of choices here. In particular, everyone, I think, has been through the intro, and there's the video there. There's stuff talking about how hardware interfaces work — it's fairly straightforward, but it helps to see it. There's the distributed bridge, sort of distributed CNFs; I think there's probably a lot of interest there. The VPN gateway case is one that came up this week in talking to Mike. And then Mike was kind enough to provide us with a bunch of questions, and I made an attempt to provide some answers to them in a way that could be referenced. Do folks have opinions as to how they would like to proceed? We've got about 25 minutes left. Don't all speak at once.
One thing I wanted to raise — it's a generic question, but I thought we could see whether network service mesh can solve it. If you look at the typical overlay/underlay concepts, you have a full mesh between the nodes; that's the typical way to build it. But is there a way to optimize the whole VXLAN mesh? Just thoughts on how we can optimize, because — let's assume that you have hundreds of nodes. Then imagine building a full mesh between those nodes; it will grow as the number of nodes increases. Any thoughts on optimizing that? I have some general ideas there. There are trade-offs, of course. It sounds like you'd kind of like to start with the distributed bridge domain stuff, and then we can jump from there. Does that sound good? Sure. Okay, cool. And I do want to, in the last five or ten minutes, go back to Mike's questions, because he put effort into constructing them, and trying to talk through them, I think, is important. I like to reward effort with actual feedback.

So, cool. This is the distributed bridge domain deck. I don't remember how much of this I animated — God help us all. So there's a class of things where your cloud-native network functions are actually distributed across a bunch of different nodes; they aren't actually living in one central place. It's a generic class of things — I tend to think abstractly, so I think about it as a generic class of things — but the truth is there's one that almost everybody's familiar with, which is distributed bridge domains. Right. So I figured if I talk through that, it will show the pattern for how you would handle distributed CNFs in general. I'm going to skip past the "getting the most out of this presentation" slide; that's in case I'm ever presenting it on video, so people can find the other decks.

So let's look at what the actual problem is here, and I'm going to try to animate this — God help us all. Hang on a second. Can everyone see the presentation? Yep. Cool. So the general problem is: you've got a bunch of pods on a bunch of nodes, and they of course have their normal k8s networking in the normal way; they've got an interface for that. But you'd also like to be able to connect them to distributed bridge domains — some distributed bridge domain zero, some distributed bridge domain one. Not everybody's connected to every bridge domain; some people are connected to more than one bridge domain. But this is a very common kind of problem that people have, for a variety of reasons, and often you will implement this with VXLAN for tunneling — though, quite frankly, whatever works for whoever is providing the distributed bridge domain CNF. So, to look at this from an NSM point of view: you start by defining a network service for your bridge domain zero. And then you deploy some set of pods or daemon sets that implement bridge domain zero, and you match across them using labels — selectors on one side and labels on the other, as in the sketch below. And let's say, just for the sake of argument, this is the full mesh case; we'll get back to your partial mesh case in a moment, Prem. In the full mesh case, you just deploy a daemon set of BR0 pods across every node. That's the simple case. And then the question is: okay, how do you actually get hooked up if you're a pod that wants a connection to distributed bridge domain zero? And it's fairly straightforward.
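Concretely, the selector/label matching Ed describes might be expressed like this: the network service declares a selector, and the BR0 daemon set's pod template carries matching labels. Type and label names are hypothetical, reusing the shape of the CRD sketch from earlier.

```go
// A sketch of the selector/label matching for a bridge-domain network
// service. The service side declares a selector; the BR0 daemon set's
// pods carry the matching labels, and NSM joins the two.
package nsm

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// BridgeDomainService is a simplified stand-in for the CRD type.
type BridgeDomainService struct {
	metav1.ObjectMeta
	Payload  string            // "l2" for a bridge domain
	Selector map[string]string // matches labels on implementing pods
}

var bridgeDomain0 = BridgeDomainService{
	ObjectMeta: metav1.ObjectMeta{Name: "bridge-domain-0"},
	Payload:    "l2",
	Selector:   map[string]string{"app": "br0"},
}

// The BR0 daemon set's pod template would carry the matching label.
var br0PodLabels = map[string]string{"app": "br0"}
```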
On every node, there is a BR0 pod. It exposes a channel for that service to the NSM. Then you update the network service endpoints in the API server so people can find you. Then the pod makes a request to the NSM for a connection to the BR0 service, and it requests that connection with some parameters that indicate that it prefers local affinity — in other words: please connect me to an implementation of this network service on the same node, if at all possible. The NSM, taking that into account and already knowing that it has a local BR0 pod, doesn't really have to consult the API server in that case. So it simply makes a request to the BR0 pod for the connection. Accept connection. You inject the interface — memif or vhost-user, whatever — into the BR0 pod for the pod that's seeking to connect. And then you inject, on the other end, the memif, the vhost-user, et cetera, into that pod and tell it that it actually has that connection. And at that point you've got something going through the data plane with that interface: the pod talks to the BR0 pod locally on your node.

So I had a couple of questions here. The first one: the exposed channel — let's assume that's the L2 channel, right? Now let's assume that pods of different tenants are being hosted. Do you intend to create a bridge domain for each tenant? Well, each bridge domain is an L2 service, right? The L2 service it provides is bridging for everyone wired into the same domain. So if I had two different tenants and they wanted two different bridge domains, those are two different network services. OK, makes sense. OK. You could even imagine the situation — and I sort of showed that a little bit here originally — where, let's just say that the far left pod is tenant 1 and the far right pod is tenant 2, and for some strange reason they agree they should have something that talks to both bridge domains. That kind of thing is supportable here. But if you just want strict separation, then you really have different bridge domains. Sure. And you could also have a model where the bridge domains are connected to a router so that they can talk via the router. That's entirely up to those bridge domains. OK. If they want to go and, step by step, build themselves something of that nature, what you're really talking about is those bridge domains having a connection to a network service that does something for them, whether it's something simple or something more complicated like a VPN gateway. That's their business, not ours. Sure. OK. Cool. Yeah, we don't need to reinvent the Neutron model here at the high level; we just have to be able to support it for the people who really, really want it.

So, the distributed bridge domain itself. You've got a bunch of nodes; they have BR0 pods. Effectively, those pods are responsible for standing up the VXLAN tunnels between each other, because that's how they provide the network service they want. So the NSM is actually not involved in this at all, because its job is just connecting pods to CNFs. Now, the tunnels could be over the normal Kubernetes networking; they could be over some other network service that's requested by the BR0 pods. That's up to whoever put the distributed bridge together.
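The local-affinity short-circuit Ed described a moment ago — skip the API-server lookup when a local endpoint is already known — reduces to a few lines. All names here are hypothetical, for illustration only.

```go
// A sketch of the NSM's endpoint choice: prefer a known local endpoint
// when the requester asked for local affinity; otherwise fall back to
// discovery via the API server (stubbed as lookupRemote).
package nsm

// Endpoint identifies a provider of a network service.
type Endpoint struct {
	Service string
	Node    string
}

// Manager holds what this node's NSM already knows locally.
type Manager struct {
	Node           string
	LocalEndpoints map[string]Endpoint // service name -> local endpoint
}

// PickEndpoint returns a local endpoint when one exists and the caller
// asked for local affinity; no API-server round trip is needed then.
func (m *Manager) PickEndpoint(service string, preferLocal bool,
	lookupRemote func(string) (Endpoint, error)) (Endpoint, error) {

	if preferLocal {
		if ep, ok := m.LocalEndpoints[service]; ok {
			return ep, nil // satisfied locally
		}
	}
	return lookupRemote(service) // e.g. list NetworkServiceEndpoints
}
```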
And then the BR0 pods coordinate with each other using whatever mechanism the implementer of the BR0 pods decided on. It could be a controller; it could be some other mechanism; it could be an SDN; they could be talking BGP for EVPN. That's really the problem of the person who is deploying the distributed CNFs, so the choice of that is outside of the NSM scope. Its whole purpose in life is to hook up pods to the network service that is BR0. Now, one of the things I will point out here — and this gets to your fully meshed versus non-fully-meshed case, Prem — in the event that, say, on node one I had no BR0 pod, either because I chose not to deploy one there or because for some reason it has had an accident, is not present, and for some reason hasn't been respawned: if for whatever reason there's no BR0 on node one, then when a pod on node one requests access to the BR0 network service, the NSM will naturally create some kind of point-to-point link to one of the BR0 pods elsewhere. Make sense? Right. So there's a cost, because that means any bridging has to hairpin through wherever the BR0 pod is remotely. But it may be worth it to you not to deploy a BR0 on all hundreds of your nodes, simply because that's expensive, and to accept some backhauling in some cases. And in fact, if you wanted to — because we have pod affinity available, where you can essentially say "please deploy my pod near another pod" — you could put your thumb on the scale in terms of scheduling of your workload pods, to encourage them to be on a node that has a BR0 pod running. But everything still works if you don't. And by the way, neither the pod requesting a connection to the BR0 network service nor the pod providing it actually knows jack shit about whether that connection is local or remote or being backhauled to some other node. It's just not their business. Make sense? Cool.

Do other folks have questions on this? I'm sure there's something I've not been terribly clear about, or some corner I've forgotten. Does it make sense to folks? I mean, it makes sense to me; I've been following the discussion here. Well, that's encouraging — you sort of have a lot of experience with distributed bridge domains in your past. Yeah, exactly. Exactly. I'm curious if anyone else — like you said, if it's not clear to someone else, we should make sure it's clear to most of the people on the call, I think. Agreed. Did that help answer your question, Prem? Yep, absolutely. Yeah. I mean, it has, again, a nice convenience: nobody knows or cares whether or not you've got a full mesh; that's not actually any of the pods' business. Right. Yeah. And the advantage you get here is actually the advantage you get from Kubernetes: it's a pod, and you can schedule it on demand. So this makes it much easier when you compare it with a typical VM world. Yeah. And one thing I will point out, because I know we have some people in the broader community who want to do this: there's no reason that you have to have a pod per bridge domain. So if you're someone who's deploying something that is smart enough to handle many, many bridge domains, it's effectively just exposing many network services. And that can work too. Right.
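The "thumb on the scale" Ed mentions is ordinary Kubernetes pod affinity. A sketch using the standard corev1 types, with the hypothetical app: br0 label from the earlier sketch; preferred (not required) affinity keeps everything working on nodes without a BR0 pod.

```go
// A preferred pod-affinity rule asking the scheduler to favor nodes
// that already run a BR0 pod, without making it a hard requirement.
package nsm

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var preferNearBR0 = &corev1.Affinity{
	PodAffinity: &corev1.PodAffinity{
		PreferredDuringSchedulingIgnoredDuringExecution: []corev1.WeightedPodAffinityTerm{{
			Weight: 100, // strong preference, but still only a preference
			PodAffinityTerm: corev1.PodAffinityTerm{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "br0"},
				},
				// Co-locate per node, i.e. "near another pod".
				TopologyKey: "kubernetes.io/hostname",
			},
		}},
	},
}
```

This would be set on the workload pod's spec (pod.Spec.Affinity = preferNearBR0), leaving the scheduler free to fall back when no BR0 node has room.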
So you can always have something that is a BR0-through-N pod that exposes a bunch of bridge domains as separate network services. So — I kind of get it, but how do you connect applications into it? If I have a worker pod that wants to connect into the distributed bridge domain, how do I do that? Yeah. So that's very much this picture. Basically, it works pretty much like anything else you want to connect to. So if I've got a pod here on the right: first of all, the BR0 pod exposes the channel, saying, OK, I'm an endpoint for the BR0 network service. Then the pod on the right is someone who wants to connect to the BR0 network service. Right. It makes a request — and as part of that request, because keep in mind that for these requests we can define our own gRPC (we're in the process of doing that in the coding activity), one of the things you probably want to be able to do is express a request for local affinity. In other words: please connect me to the instance of this on the same node, if at all possible. So, having expressed that — requested a connection to the service and requested a local affinity preference — the NSM, knowing that it has something to satisfy the local affinity preference with, goes ahead and does the normal setting up of a connection to the BR0 pod: request a connection; the BR0 pod says sure; you inject an interface — memif or vhost-user, et cetera — for that connection into the BR0 pod; the NSM injects the corresponding interface into the requesting pod and completes the accept. And at that point you have an interface between the pod and the BR0 pod locally, like a veth or something like that. Right. So how does it work remotely, if the BR0 pod is not on the same node as the pod? If it's not on the same node as the pod — let me jump back quickly; it's a slightly different deck — remotely, it's like connecting to any other remote network service.

So the generic case for any network service that is remote is roughly this one. If I'm connecting to something that's remote — just imagine in this picture that the pod on the left is your BR0 pod; it happens to be on a different node. It exposes the endpoint; it's advertised. I'm going to request a connection. You can imagine the pod saying "local, please, if at all possible"; the NSM realizes it can't do that. So it figures out where it can find a network service endpoint for the BR0 service, requests a connection, and it looks very similar — it's just that you've got the peering between two NSMs. And the connection — what is the connection? So locally, the connection from the pod is to whatever your data plane is. Yeah, a local interface — and then your data plane will have been provisioned by the NSM to do whatever it agreed with the remote NSM, whether that's VXLAN or GRE, whatever the fuck, right? Not our problem. They've come to some agreement between themselves in terms of what they prefer and what they support. Make sense? Yeah, let me think about it; I'm still trying to map it into Kubernetes. So — it sounds like what we're saying is that the pod requesting the BR0 connection, if he doesn't request local affinity, then he doesn't care where it is.
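The "agreement between themselves in terms of what they prefer and what they support" that Ed just described can be pictured as a simple preference-order negotiation between the two NSMs. A purely illustrative sketch; the actual NSM-to-NSM protocol is still being defined in the gRPC work mentioned above.

```go
// A sketch of NSM-to-NSM tunnel mechanism negotiation: the local side
// has an ordered preference list, and the peers settle on the first
// mechanism both support. Illustrative only.
package nsm

// NegotiateTunnel returns the first locally-preferred mechanism that
// the remote side also supports, or false if there is no overlap.
func NegotiateTunnel(localPrefs, remoteSupported []string) (string, bool) {
	supported := make(map[string]bool, len(remoteSupported))
	for _, m := range remoteSupported {
		supported[m] = true
	}
	for _, m := range localPrefs { // honor local preference order
		if supported[m] {
			return m, true
		}
	}
	return "", false // no common mechanism; connection cannot be set up
}
```

For example, NegotiateTunnel([]string{"vxlan", "gre"}, []string{"gre", "mpls"}) yields "gre".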
It's up to the network service manager to figure out which BR0 pod to use and, in turn, instead of making a direct BR0 connection, to build the tunnel through which that layer 2 is going to be tunneled — like you say, whether it's GRE or VXLAN or whatever. That's pretty much what we're saying. So the interface, the sequence of actions, is pretty much the same between an NSM and a pod. Yeah, exactly, it's pretty much the same. If you either don't request local affinity, or the NSM simply can't provide it to you because nobody has run a BR0 pod locally, then it will automatically find some way to get you connected to that BR0 pod by finding a peer NSM and then programming whatever the agreed tunneling is via your data plane. Right. Yeah. And so, effectively, either way the pod gets a point-to-point tunnel to somebody who is providing the distributed bridge domain service, and then the distributed bridge domain figures out how it wants to handle the point-to-multipoint bridging behavior. That's a very succinct rephrasing of the idea, Tom; I appreciate it. The only difference is the packets will have less latency and perhaps better performance, because the network service manager hopefully will be able to get the best quality of service. Maybe sometime in the future we'll have to talk about SLAs and all that, but this is sufficient to work. Yeah. I mean, the idea is: you can't always get what you want, but if you try sometimes, you get what you need. So clearly the ideal is to be able to get a local BR0 pod in this case — or at least, from the pod's point of view, that's the ideal. Maybe from the guy who's not wanting to burn hundreds of instances of BR0 where he may not need them, it may not be ideal. But from the pod's point of view, that's ideal, and if we can do that for them, then we do. And if they can't get that, then they still get connected to something that provides the network service they asked for. Make sense? Cool.

So I'd actually like to drop back — we've got about seven minutes left, and Mike was kind enough to provide a bunch of really good questions, and I made an attempt to answer them. And I think the thing that really jumped out at me, Mike, was the "who is agreeing with whom about what," so I sort of phrased the deck that way. But you asked a lot of granular questions in there that were also very helpful. Shall we dive into that? OK, sure. Cool. So, I didn't know about this until the meeting started, so I've only had a chance to skim it. In fairness, it had only existed for about 30 minutes before the meeting started, so I had no expectation you would have seen it. This was literally me going to do the agenda, realizing these were good questions, and wanting to try to provide good answers. OK. So I have a bit of a reaction, based again on just the quick skim that I've done so far. And it is: it's kind of interesting — you managed to not show either of the agreements that I teased out of you. OK, OK, I'm sorry; I didn't click into the question you wanted answered. It's revealing, right? Because it's telling me that you're focusing on different issues than the ones I think need to be explained up front, at the top of the presentation about this whole idea.
So the two agreements that we talked about: there's the agreement between the application-level containers, right? We talked about the example of a web server and a web client. There's an agreement between them, which is that they are communicating via TCP, essentially. And there's a local agreement — there are really two, but there's so little difference that I'll just pick one, right? There's a local agreement between each of those bits of application and the kernel. We focused on the web server case, where the server has agreed with the kernel that it is going to talk to the kernel in terms of listen and accept. And then there is the agreement across the network between the web server and the web client, which is that they're going to talk to each other with TCP. And so let me try to map that into what you did. Here's the thing — here's the thing I'll suggest to you: a lot of those agreements are actually not agreements that either the app or the client gives a shit about. They're just there; they're the way things happen to be done. I would actually suggest that the agreement between the web server and the web client is that if the web client sends a stream of bytes towards the web server, the web server will interpret them in some particular way and send a stream of bytes back to it. The fact that they happen to be using TCP is sort of incidental; it just happens to be the way that you send streams of bytes between two entities. So I would say that's actually the important thing: the payload is what matters, not the underlay. In this case, the underlay would be the TCP socket. I would say this is a matter of layers — or maybe of scopes. The agreement between the application code and the kernel is a little more like what you said: they use the kernel's interface for streams over the network. But there is another agreement, I think, between the network peers that is important here, because these things are not developed by the same shop, and they're not deployed or operated by the same shop. And so different organizations across the world agree that web servers and web clients are going to talk to each other via TCP. It is very specific that it is TCP that they're going to talk to each other by over the network. That's actually not so true — we have a lot of web traffic right now that happens over QUIC, for example. The TCP incidental underlay that's negotiated between the kernels is not actually, in any way, shape, or form, the concern of the web server or the web client. Neither of those actually cares. Again, yes and no. I would agree with that as a statement about the code in the client. However, there is somebody who has chosen to run this code and expects it to have some effect on the wider world, and I would say that is grounded in TCP and HTTP. But it works just as well over QUIC. So the world is not as simple as it was in 1990 — sorry, I should say maybe 1995. Right, so there has been evolution, and there is a problem of how you introduce new protocols. There have been several runs at introducing new protocols: we have SPDY, we have HTTP/2, we have HTTPS, we have TLS, all sorts of... All of those are actually protocols that live on top of the byte stream. They're not protocols that live under the byte stream. So they're not playing at the level of TCP; they're playing at the byte stream level. Right. Right.
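For readers who want Mike's "local agreement with the kernel" in concrete form: in Go it surfaces as listen/accept over a byte stream, and whatever carries those bytes underneath sits below this interface. A minimal echo server as an illustration:

```go
// The server's side of the local agreement: it talks to the kernel
// purely in terms of listen/accept and byte streams; the transport
// carrying those bytes is below this line of the contract.
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", ":8080") // the listen half of the agreement
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept() // the accept half
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			io.Copy(c, c) // echo the byte stream back
		}(conn)
	}
}
```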
So I guess you have to tell me — I know we're out of time, so let me try it this way. There are multiple agreements in the world, okay? There are a bunch of people who have agreed that we're going to leverage TCP and the available evolution facilities built into these — there are a bunch of protocols that have agreed that we're going to use... No, I think — no, it's important to understand it's not just the kernels that have agreed. It's the people that chose to run this application code on this kernel, with the expectation that other people out there in the world, by looking their IP address up in DNS, can reach their application code over TCP. And yes, there are some other people who are using QUIC; that's an additional agreement. But there are a bunch of people who have agreed on TCP and HTTP and the various elaborations of that. But not because they actually care about it — because it happens to be there, and it happens to be something that's universally available and universally used. The important point is that it's an agreement, because there are a bunch of people who have agreed on it. The important distinction here is that in the case of network services, there is literally no universally accepted agreement for how we move an IP packet — how we tunnel an IP packet, or how we tunnel an Ethernet frame or an MPLS frame — between two places. There are a million and one answers to that question, and none of them are actually agreed. And it turns out that if we focus on the payload, we only have to agree, link-wise, between the people who are handling the underlay carriage. And by abstracting that away, we don't actually have to get agreement at all between the pods about how we have this conversation. I think that's just a restatement of the existing old world, right? That's network services V1 — slap a V1 on it. I do apologize; I have a hard stop at the top of the hour. Let's make sure that we follow up — I'd be happy to talk to you either offline or in the meeting next week, either way, because I think this is a very interesting conversation. It has been marvelous. I will see you guys next week, same time. Cool. Bye.