Stuck the link back in for the meeting. I also wanted to remind everyone that the call is recorded from the time the meeting starts and automatically posted to YouTube. One of the amazing things I discovered is that the videos are not automatically posted; there's actually someone sitting there who drags the video on, so we really need to send that person flowers or something. Absolutely, I think I know who it is; let me see what I can do for KubeCon. All right. So you were engaged outside of NSM yesterday during the call, and I brought some stuff up with Frederick. Diving back into the glossary today, I want to rip the band-aid off and really have a discussion on what the data plane is. When I talk to Frederick, I feel like I'm in line with what he says, that NSM is almost like a controller of controllers. It seems like there's some weirdness around what the daemon does, because right now the data plane concept is very Kubernetes-centric, and then it gets super weird when we start moving into some of the other use cases. Because if I'm living in a VM, I'm not actually talking directly to the data plane at this point; at least from some of the examples and use case documents I've been looking at, it seems like in this instance, from NSM's perspective, the data plane is really Neutron: it's making API calls, making requests, getting stuff instantiated. Very well. Quickly, part of the thing is, and I may have not sufficiently emphasized this: do you remember in the deep dive talk where I made the point about the abstract architecture versus the architecture specific to Kubernetes? A big part of why I did that is because there are certain aspects of the system, as currently written in code, that are pretty specific to Kubernetes. So, for example, as you pointed out, the data plane that we run in Kubernetes is quite dissimilar from
the data plane that we might choose to run in some other environment, right? At the end of the day, what it really comes down to is that something has to stand up the plumbing for the point-to-point connections. The other thing that's very interesting about data planes as a concept, when you think about them, and this is where I think we've gotten hung up in the past: I don't know how many folks have ever had the privilege of working on building a real physical router or switch, but you end up with some kind of control module, and it has what it believes is the control plane, because it's on top, and it talks to something below it that it thinks is the data plane. The something below it is actually a control plane that then farms things out to a bunch of line cards; it thinks of itself as the control plane and the line cards as the data plane. When you get to the line cards, there is a control-plane level on the line card that's actually being talked to; it thinks of itself as the control plane, and it pushes things further down into the data plane on the line card. And if you're really having a fun time, the line card has daughter cards and the whole dance repeats again. So my experience has been that the most productive definition of a data plane is one of two things, right, used sadly interchangeably. The first definition would be to say that a data plane is that thing which you talk to when you would like the handling of packets to happen in some way. So it may be that, internal to that thing, it's talking further down the line, as in the example that I gave, but effectively from your point of view, wherever you happen to sit in the system, that thing is the data plane, because it's the thing you ask to do things that involve handling packets. That's one way of defining a data plane.
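The recursive control-plane/data-plane stack just described — each layer treating the layer below it as "the data plane," while that layer sees itself as a control plane for whatever sits below it — can be sketched as a few lines of code. This is purely an illustration of the two definitions given above, not NSM code; all names are made up.

```python
# Illustrative sketch (not NSM code): each layer in a router-like stack
# treats the layer below it as "the data plane", while that layer sees
# itself as a control plane for whatever sits below *it*.

class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # what this layer thinks of as its "data plane"

    def handle_packets(self, request):
        # Definition 1 of a data plane: the thing you ask when you want
        # packet handling to happen. From the caller's point of view,
        # `self.below` is the data plane, even though it delegates further.
        if self.below is None:
            # Definition 2: the thing that actually touches the packets.
            return f"{self.name} touches packets: {request}"
        return self.below.handle_packets(request)

daughter = Layer("daughter-card forwarding engine")
line_card = Layer("line-card control plane", below=daughter)
chassis = Layer("chassis control module", below=line_card)

print(chassis.handle_packets("forward flow X"))
```

Under definition 1, every layer in the chain is "a data plane" to its caller; under definition 2, only the bottom-most thing qualifies, which is why that definition's scope of utility is smaller.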
The other way of defining data plane, which is often less useful but is still valid, is to say the data plane is the thing that actually touches the packets. Now, it's interesting to note that Neutron is absolutely not the data plane under the second definition. In fact, very few things are actually the data plane under the second definition, which is why its scope of utility is smaller. Okay, so everything you say makes perfect sense to me, so let me express my concerns with the distinction between these two concepts. In the Kubernetes world, an NSE seems to be very tightly coupled to the data plane concept. And I mean, we're not going to try to make Neutron an NSE, are we? Because an NSE tends to be directly in the second definition of a data plane, where there is a flow of packets going in and out of an entity. In this case, it would be a VM that Neutron is provisioning for you. But in the Kubernetes space, there's also a fracturing of how we treat Kubernetes versus everything else. Say I want to go into the PNF space. I can go into the PNF space by having the daemon call a controller, such as ODL. And in this instance, from NSM's perspective, ODL is my data plane, right? From the first definition that you gave me. But then I can also have an instance where I'm taking some of that general-purpose compute on an NCS box or a Juniper MX box or whatever, hosting a container that's running our daemon. And now that's the data plane, and it's making requests directly to the box, maybe via its APIs, right? So we're getting into this weird scenario, and it's making me a little bit nervous, because Frederick was talking about his interactions at Mobile World Congress. There are starting to be these third parties that are going to be interested in building these NSEs.
And it seems like it's going to be a very disjointed thing, because it doesn't seem like everything's treated, quote unquote, the same, which is what we're trying to normalize here with this glossary: this is a data plane, this is an NSE. The way it's set up now, depending on what space I want to work in, it seems like we're running the risk of having very disjointed implementations of an NSE, a data plane, et cetera. So I'm just curious why Kubernetes isn't treated the same, where I just call the Kubernetes APIs and ask for stuff, other than the fact that the namespace injection is weird. So there's a lot to unpack there. The first thing, when you mention Neutron: why could Neutron not be an NSE? Because if you think about what service Neutron provides, it provides things like a bridge domain or a subnet, and those fall squarely into being a service under the definition of how a network service looks at things. I didn't say that it couldn't be an NSE, though, right? I said that it's not going to have any traffic flow through it at any point in time, unless you're going to try to make an abstraction of, say, the Neutron router, which is its own little Linux namespace, et cetera. But I'm asking: am I going to pack Neutron into an NSE so that I can treat it and call it a certain way? Let's actually back up and look at that concrete example, because I think it's a really interesting one. This is part of why I always start with the definition of a network service, always. Neutron is providing a network service, full stop. That's what Neutron does. And so you could imagine having a pod running in Kubernetes, or even some other thing running someplace, that would like to consume the network service that is a Neutron network, right? So you want a specific Neutron network that you want an L2/L3 connection to.
And so at that point, you need to look at: okay, how do I connect that client to something that will provide me with that network service? Now, if you look at the Neutron case, what that's inevitably going to look like is provisioning, on a vSwitch somewhere, a VXLAN or other sort of port that plumbs into the bridge domain that is the Neutron network. And so if you really want to put a fine point on it, you can say the network service endpoint that you are actually connecting the workload to is a particular vSwitch that is part of providing that Neutron network, with, say, a VXLAN connection as the point-to-point connection there. And that's how you would treat Neutron as a network service. And then, when you go track down the concrete thing you're talking to for that, it's going to be some vSwitch running on some particular host node for Neutron; that is the way that you get it into the system. Does that make sense? Yep. Now, one of the things that may be useful, because I get your point that this sounds different in lots of places: picture this conceptually as a tree. If you start with a particular network service as an abstract thing, you can start at the abstract and drill down to the concrete, and if, instead of looking side to side at your siblings, you look up to your parent node and then at how that reflects down to the siblings, it's a much more familiar game. Does that make sense? Yeah, everything you say makes sense to me. It's just that what we're describing, we do completely differently in the Kubernetes space: instead of just going through Kubernetes and demanding more things from Kubernetes, we're kind of supplementing some of its functionality, I feel like. Well, effectively, this is because we can, right?
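The resolution step described here — a client asks for an abstract network service, and what comes back is a concrete NSE (a particular vSwitch on a particular host) plus a point-to-point mechanism that plumbs into the bridge domain — can be sketched as follows. All of the names, fields, and registry shape are hypothetical illustrations, not the real NSM API.

```python
# Hypothetical sketch of resolving an abstract network service to a
# concrete NSE. A client asks for "neutron-net-blue"; the answer is a
# particular vSwitch on a particular host, plus a VXLAN mechanism.
# Names and fields here are illustrative only, not the real NSM API.

def resolve_network_service(service_name, registry):
    """Pick a concrete endpoint that advertises the requested service."""
    candidates = [e for e in registry if e["service"] == service_name]
    if not candidates:
        raise LookupError(f"no NSE advertises {service_name}")
    nse = candidates[0]  # real selection could consider locality, load, etc.
    return {
        "endpoint": nse["endpoint"],  # e.g. a vSwitch on some compute host
        "mechanism": {"type": "VXLAN", "vni": nse["vni"]},
    }

# A toy registry: one vSwitch advertising the Neutron network as a service.
registry = [
    {"service": "neutron-net-blue",
     "endpoint": "vswitch@compute-host-7", "vni": 42},
]

conn = resolve_network_service("neutron-net-blue", registry)
print(conn["endpoint"], conn["mechanism"]["type"])
```

The point of the sketch is the tree structure described above: the client only names the abstract service (the parent node), and the resolution walks down to a concrete sibling that happens to provide it.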
The unfortunate problem for OpenStack, for the better part of half a decade, has been that you're trapped in Neutron, whether Neutron meets your needs or not. And so effectively, in a VM, you have basically two options you can play in OpenStack. You can either plug into a Neutron network as a network service, or, if you're a VM running in OpenStack, you can, as that VM, simply act as both the network service endpoint and the network service manager for yourself, expose the remote API, and go register yourself, at which point Neutron is a happenstance in that path and you've completely bypassed what's going on in the networking for OpenStack. Did that help at all? Well, I mean, yeah, I understand it. It doesn't make me feel any better. When I look at my network and me trying to implement this, it's going to raise lots of challenges with the lack of consistency, and it's going to make people less inclined to want to use Kubernetes, in my opinion. I don't know, it's just weird. Consistency is at the wire. That's the trick, right? So if you have a wire and you want to plug the wire from your workload into a Neutron network, that's fine. You've plugged a wire into a bridge domain that has a bunch of other things going on in it, because that's the thing you wanted, and there may be other things plugged into that bridge domain that you're interested in. If you have a wire and you want to plug it into a particular VM, because that VM is doing the thing that you care about, then you need to be able to plug that wire directly into that particular VM. The consistency is in the fact that we're dealing with virtual wires, not virtual bridge domains. So I think a confusion that we're going to run into, one that I see in lots of places, is that we have a ton of people who spent their entire time thinking only in terms of L2 network segments on the OpenStack side. And those segments don't even exist on the Kubernetes side.
And it turns out they're a terrible abstraction for most of what you want to do. Like I said, the thing that makes it a challenge for me is in Kubernetes, because I'm way less worried about the OpenStack space and recreating those problems. It's how, in certain instances, NSM wants to bring the data plane with it, versus me having my infrastructure out there with predefined data planes already in place. And we won't even call it a data plane; we'll just say the method by which I forward packets: the actual flow of something making decisions, saying forward this out of this physical or this virtual interface. With a lot of these NSEs, a lot of the time, and I don't think that VPP is the default data plane, but VPP tends to get packaged into a container which also acts as an NSE, et cetera. And I don't understand why NSM isn't just, say, putting up containers that maybe don't have any forwarding element in them at all. It's an NSE, but maybe it's just Quagga or GoBGP with no forwarding element, and all I want is to look at ASNs and push things with different headers, et cetera, and I want to rely on a completely separate data plane sitting underneath in the host. It seems like the methodology, and maybe it's just because these are the only examples I've seen, is that I would have to pair something like VPP or OVS in that container with Quagga, so that it's pushing and pulling packets in and out of this network stack, versus it just consuming the data plane that sits underneath. So, not necessarily at all. Let's take the simple example of a BGP route reflector. Let's say you want to run a BGP route reflector. If you're running a BGP route reflector, effectively what happens is you've got TCP connections coming in and TCP connections going out. You're not actually programming a data plane. You're just doing reflection of the routes that you receive as a BGP route reflector.
That's not super different, quite frankly, from running a web server. I know that on the telco side they like to talk about those things as VNFs, but from a cloud-native point of view, they're really just another application that talks TCP. Does that make sense? But the place where life gets interesting is when you start touching packets. That's where things get interesting and different from just saying, oh yeah, just take FRR or Quagga and stick it in a container and roll it out like any other pod, just like a web server or anything else. It's when you actually need to start hauling packets across virtual wires, because they're actually being processed by things. Because in the case of a BGP route reflector, it's not actually dealing with L2 or L3 payloads. It's talking about them, but it never deals with them. It's all an L4-and-above kind of game. So I don't have to deliver L2 frames or L3 packets to a BGP route reflector; I just have to be able to get TCP streams to and from it. Does that help make sense at all? Yeah. And as to bringing the data plane along: NSM is only bringing a data plane to the places where the existing data plane is deficient for the purpose. And please note, deficient for the purpose means this purpose, not all purposes. The Kubernetes stuff is super good for the kind of application things that people normally want to write on it. It's really good at those things. But if the thing you want to do is to take a packet and move it through a collection of CNFs, the Kubernetes system was never designed for that, and the data plane that it has is incredibly ill-suited to that purpose. And so all NSM does is say: okay, great, we have a situation where we need dynamism, meaning we need to be able to have wires come and go, not just at the beginning of the lifecycle of the system, and where we need to think in terms of wires, not in terms of entire cluster-wide L3 reachability.
And that comes with all the other things that happen as part of Kubernetes networking, which is brilliant if you're writing an application; it's exactly what you want. But if you're writing a CNF, as many people have discovered, it's not great. Likewise, if you're running in OpenStack and the Neutron networking is doing the things you want, then probably you just want to be able to allow other workloads to connect to a Neutron network, and that's great. But if Neutron networking has nothing to do with what you're actually trying to accomplish, if what you really have is a packet-handling VNF that is sitting in a VM that happens to be plugged into a Neutron network, the Neutron network is just your only option for getting out of that VNF. It's not really serving you. And so in that case, you probably want your VNF to expose itself as an NSE directly, with a remote API, so that it can accept incoming connections that may come in as tunnels of various sorts, et cetera, so that it can get the wires from the people it's talking to and deal with them. Another good example: if I have a physical network that does SRv6, and I'm going to throw this out for you, Daniel, that data plane is actually entirely capable of doing things that are the moral equivalent of virtual wires. You just have to ask it the right way. Hello? So can we say that the data plane is: there's a connection, and we can maybe define the connection as maybe some type of acknowledgement and then some type of sending of packets, and then there's the packet-treatment portion of the data plane, which is some type of manipulation, changing headers, whatever. It seems like there are two pieces that compose and make the data plane. Isn't that the network service, though? Because, I mean, that's why I'm a little confused too. Right, because in the Neutron example, it's not providing you... well, here's the thing.
It is providing NSM a connection, or ODL talking to PNFs is providing a connection, but it isn't actually, quote unquote, in the literal sense providing said connection; it is just actuating it, right? I don't know, the sneaky network people's heads are going to explode. Maybe because of the explanation of how Neutron works. For example, right now, in any kind of solution, we have this SDN control plane that comes in, whether it's Tungsten, whether it's ODL, even Neutron, and the way it works is it defines: you do GRE, you do VXLAN, you do those things; you're bound to what it thinks it should be doing. So that means that if I want to run, for example, Tungsten, I have to get the boxes that can actually talk with Tungsten to make it do the overlay correctly to the node. If instead I had a service plane and a control plane which is agnostic, I'd have: I need to connect router A to compute or Kubernetes node B, and if I decide it's going to use, I don't know, just VLAN, then I will define in that NSM mesh the proper data plane I want to use for this. If I'm another company and I decide I'm going to use IPsec to the nodes, then I can actually make that work. I'm not bound to one vendor telling me that everybody needs to be GRE because it decided, religiously, it's going to be GRE, which is the problem we always have right now. You want to build something, but you're forced to design your network based on what the software vendor decided to do, versus going the other way around: being able to create a mesh and decide underneath what kind of protocols I need to be working with, because of my own network. All of us have MPLS networks, and the first burden we got is that we need to find a gateway. We're going to centralize; we've built everything flat out across networks.
We removed summarization, people, because we want to be able to do MPLS, but we're jamming all the traffic onto one box that needs to translate one data plane to another, because it's the only way to work with the virtual world, which to me was our biggest challenge with the lab. So if I'm able, with NSM, to do that abstraction and say: I have a physical box, and I decide what kind of tunneling or overlay or packet encapsulation I'm going to use to go wherever I need to be, I think that's why it's a valuable solution, or vision. Maybe I ranted. If I ranted, sorry. I'm just happy to have people who are not me ranting. I'm definitely done ranting. It kind of gets to the point that I was trying to make, Daniel, which is that effectively you're in this world where you're held prisoner by whatever the SDN controller thought, because it wants to own the world, and you want a world where you say: look, I've got these different things, and I want to connect them together with wires, and I will decide how the hell the world works, so I don't have to go revamp my entire damn network to hack around the limitations that I'm getting from my virtualization SDN provider. So let me ask you this question then. Are we going to, in some cases, completely bypass Neutron and just manually configure the virtual switch and add ports to a VNF? Because, I mean, like you said, I get everything Daniel's saying, but if you're just calling Neutron, then regardless of whether or not we put an intent layer above it, it's still limited to what Neutron does and doesn't want to do, right? And to what ML2s are and aren't in place. So are we going to go in and say we're just going to bypass Neutron, go into the system and make direct port connections, similar to the concept of directly injecting interfaces into namespaces and pods? Or are we only going to work through Neutron?
But if you look at the history of all the SDN controllers that came out, Tungsten and even the newer ones, they all kind of bypassed Neutron. They just had a way of knowing what Neutron was doing, and afterwards you needed to go to their APIs to make the nice VPN stuff work, because Neutron was not able to do so. And over time, they added the modules within OpenStack to be able to do it. So doing it the same way now means maybe I should have stayed that way, because with the cool things like CRDs, I can still make it happen and make it look like an umbrella of APIs which are standardized, but I don't have to recode everything to make it work, which has been one of the big challenges we've had. I mean, I agree partially, but ML2 has two different types of drivers, right? And it's usually a specific type of driver that's saying, from an orchestration standpoint: I am outsourcing all of this from Neutron to Tungsten, to VTS, to ODL, whatever, right? And then I'm going to bring in my own agents, et cetera. But it still doesn't change the fact that there are specific device-type ML2 plug-ins inside of Neutron that enable the ability to do VLANs, to do VXLAN, to do GRE. What typically happened, in my experience, the last time I put my head over the wall, was that what people effectively did was simply redefine what all the words in Neutron meant. So you would go and write an ML2 plug-in and you would use the whole Neutron API, but what was semantically meant by those words was completely different for you. And a lot of folks did that kind of stuff, but again, look at the domain of control. Now you're turning over your entire Neutron network, and everything that happens in your Neutron cluster, to a particular opinion about the world.
What we do here that's quite different is that we essentially allow you to connect the pieces together so there can be multiple opinions about the world going on simultaneously; it's not our problem. Now, as to the question of whether we ever bypass Neutron in OpenStack: honestly, that's a question for someone who wants to go do the investigation of how that might be doable and then do that work. I don't honestly know what the appetite is in the OpenStack world for that. I know that OpenStack has historically taken actions to prevent anything that is not Neutron from happening with networking. Maybe their opinions will be different now, but effectively, from a broader Network Service Mesh point of view, all we need is something that talks the Network Service Mesh gRPC APIs, and we're good to go. So if somebody were to go and do something like this for Neutron, where it basically does the direct wires into VMs in Neutron, that would be something we would be okay attaching to, because it's not something that differs from the way the world looks to us. The world still looks like: somebody advertised a network service endpoint, we can reach out to someone who can help us connect to it, and that someone ensures that that end of the connection happens. Did I actually answer any of your question, Jeffrey? I feel like I may have walked around it in circles. Yeah, I don't know. I'm just going to have to keep driving ahead and see how things materialize, I guess. One major thing that I think is going to have to happen for people to adopt Network Service Mesh, and I don't think it'll be difficult, but it'll be a change, rather, is that the view most people have of the networking world is usually centered around a bridge domain, or centered around a VNF, or some bigger thing that's occurring. What we're doing is actually saying: well, let's not center the world around that.
Let's center the world around the individual wires themselves and focus on making all of those things work. I know we can work out how to support any type of services or other things that you want to attach to it. There's going to be a change in how people think about networking as they start to use Network Service Mesh. I think it's something that'll come naturally. The people here in this conversation are already going through that transition to a significant degree. As we start to expand out, we'll see more of that happen, but I think part of what we need to do is work out how we help people reorient their mindsets to no longer be centered on a bridge domain or subnet, and rather to be more connection-oriented. Why connection-oriented? Well, you bring that up, and Watson even has it in a comment: he said we need a definition for connection. I'm just saying, the problem is, and this is where my evil sneaky networking person is going to come out, it's the lack of consistency between when there is and when there isn't a forwarding element, because that does matter. If we say we're going to move into the CNF space, then almost all of your developers are going to be sneaky networking people. That lack of consistency, with some use cases bringing a forwarding element and some use cases not bringing one, et cetera, makes my head hurt and makes my heart sad. I don't know. Maybe, like Watson says there, we need a definition for what a connection is, but there's not enough consistency for me; my theoretical mind is exploding. We've got the term virtual wire. We definitely need a definition for a connection. To Frederick's comment about thinking: part of what I think we have to realize is that the cloud-native people have already completely abandoned the notion of a bridge domain. There is no L2 concept in Kubernetes at all. That concept does not exist, and they don't want it.
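One possible shape for the "connection" definition being asked for here can be sketched as a data structure. The field names and values are illustrative only, not the real NSM API: the idea is simply that a connection (virtual wire) names its two ends, the payload type it carries (L2 frames vs L3 packets), and the mechanism used to plumb it.

```python
# Illustrative sketch of a "connection" definition for the glossary.
# Field names and values are hypothetical, not the actual NSM types.

from dataclasses import dataclass

@dataclass
class Connection:
    client: str     # who asked for the wire (e.g. a pod or VM)
    endpoint: str   # the NSE providing the network service
    payload: str    # "ETHERNET" (L2 frames) or "IP" (L3 packets)
    mechanism: str  # how the wire is plumbed: "KERNEL_INTERFACE",
                    # "VXLAN", "SRV6", ...

wire = Connection(client="pod/quagga-0",
                  endpoint="vswitch@compute-host-7",
                  payload="ETHERNET",
                  mechanism="VXLAN")
assert wire.payload in ("ETHERNET", "IP")
print(wire)
```

Note that nothing in this shape requires a forwarding element on either end: a wire to an L4-only workload like a route reflector and a wire into a packet-handling CNF would both fit, differing only in payload and mechanism. That may be one way to frame the consistency question raised above.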
So the world has already moved on from the bridge domain in the application space. We just have to catch up. As to the definition of a connection, I think that probably would help a lot. Here's my only thing on that, Ed. I know that Kubernetes doesn't have this concept developed, but it doesn't change the fact that there is a virtual switch or a pass-through technology; something is looking up MAC addresses and making decisions on what ports to forward things out of. Just because Kubernetes wants to ignore all that, and I'm completely supportive of the idea that we continue to abstract that from them, it doesn't change the fact that there is something that is looking at MAC addresses and saying: guess what, you're going to go out this port, because that's what ARP told me. That doesn't go away just because Kubernetes ignores it. Actually, no. If you look at the most popular CNI driver today, which is Calico, Calico is pure L3. There is no L2 domain there at all. It's all L3. But that's one CNI. It's the most popular CNI today; it's probably 95% of the Kubernetes traffic out there. No, and I get it. There is literally another senior architect in my company who is trying to convince us that we should do no L2 anywhere; we should put an ASN per data center and do BGP with L3 all the way down to everything. And I get it. But if I'm going to do any L2 stuff in NSM, because I have this giant commercial network where all we do is sell private circuits to people, all L2 via VPLS or EVPN, some variant of MPLS going on there, then maybe we just say Kubernetes doesn't fit in this space, I guess. Well, the other thing that you should probably look at is how people who actually run massively scalable data centers do it, right? And I'll point you to one public example you can go look at. If you go into GCP today, you will have a slash 32. Period. Full stop. Right.
So what you get there is a slash 32, because everybody running a massively scalable data center has long since realized that you want to move L3 as close to the edge as you possibly can, and many of them move it all the way down to the VMs. So here I'm seeing two types of world, right? One is what we are trying to say: a pure greenfield environment where every workload is basically cloud-native, and then Kubernetes tries to solve the problem, right? But back in the service provider world, we are also seeing a hybrid, where you have physical network functions, virtual network functions, and also the cloud-native ones, right? Now, the most important thing is, if we get into the service provider world, we probably need a mechanism or a way to say: okay, here is the non-cloud-native world, and here is what would probably help you define the entities until you migrate to the cloud-native world. And we may need to put in definitions or some steps or exercises that would essentially help embrace the non-cloud-native world, right? Absolutely. And here's the bottom line, quickly: if you want a bridge domain, if an L2 segment makes you happy, that's a network service, dude. It's fine. You're absolutely right. But the only thing is, what we are trying to do is... okay, so let me tell you one other thing. In the cloud-native world, you can in fact have apps, or have any of it, without the data plane playing any important role. It can just be an IPC or an RPC between endpoints, and then the data plane programming does not play an important role, right? It's just L3 endpoints everywhere. But when we come to the service provider world, you have MPLS endpoints; you have endpoints which are not really L3, but you still need to address them.
So I sincerely feel that we would probably need to put in the effort and exercise to write those definitions, or do a mapper type of thing, so that we can essentially go and tell customers, or some of the folks: hey, I know that you are living in the Neutron world, and in the Neutron world I know that you're talking about bridge domains, I know you're talking about this, but if you want to migrate, this is the step, or if you want to address it, something along those lines, right? I mean, either we can continue to do it here, or we can in fact have a separate discussion for the people who are interested in trying to bridge the gap between Neutron, or the physical world, and this, right? Because I really love the way we are driving NSM, but the only thing is... I should put it a different way. We should probably be focusing on what is right for the pure cloud-native world, but also try to get the rest of the world to be part of the journey. I'm completely with you. So my suggestion would be the following. In terms of the legacy world: if I have a world where, like Neutron in OpenStack, I've got a pure L2 segment, an L2 network, you can absolutely connect workloads to that L2 network as a network service, right? So if you go and run a pod in Kubernetes and you want it connected to your OpenStack Neutron network, that Neutron network is a network service; we should be able to connect you to it as a network service. It will have payload type L2, and away you go. If you want to move further than that, say I actually want something... Sorry, can I request others to go on mute? There's a lot of background noise. There we go. So, I mean, you can absolutely do that.
I think that's actually exactly the right first step, because, as I think Jeffrey pointed out, there's a lot of legacy in the world, and there will be a lot of legacy for a long time. So that first step of "okay, great, how do I offer my Neutron networks as network services to the broader universe, some of which is going to be cloud native in Kubernetes", I think that's an important first step. It's just not the last step. Yes, but one thing I strongly feel is that we would also need to seriously look at how we address the data plane elements, right? When we say data plane elements: where is the plumbing happening? Is the plumbing enough? Aspects like that. Yeah, one thing to keep in mind is that the plumbing here is very distributed and federated, as opposed to centrally controlled, generally speaking. Right. So if I'm running a Kubernetes cluster over here, and in that Kubernetes cluster I have a pod, then I'm probably talking to something that looks roughly like the VPP agent data plane right now, even though it may or may not be driven by the VPP agent, and it just produces cross-connects. If I'm over on the OpenStack side, and I'm exposing Neutron networks as a network service, I'm probably something that talks the NSM remote API and basically picks a vSwitch instance, sticks a VXLAN endpoint into that instance, configures it, even though there's no VM on the other end, and passes that back as the remote end of the connection to whoever talked to me. I don't have to have a single unified data plane for the whole world. This is, I think, part of where we're actually in a post-SDN world. SDN basically presumed that you had a single entity that controlled all the things. With NSM, we're in a world where that doesn't have to be true, which means we can actually collaborate with all the things, which was not possible in the SDN "I must control all the things" model.
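That federated picture, where each domain answers connection requests with its own local plumbing rather than one central controller, can be sketched roughly as follows. This is a hedged illustration only: the names (`Forwarder`, `connect`, the domain keys, the mechanism strings) are made up for the sketch and are not the actual NSM API.

```python
# Hedged sketch: in NSM terms, anything you can ask to forward packets is a
# data plane. Each domain plugs in its own implementation of a common
# cross-connect interface; nothing owns the whole picture.

class Forwarder:
    """Anything that can be asked to cross-connect two endpoints."""
    def request(self, src, dst):
        raise NotImplementedError

class KernelForwarder(Forwarder):
    # e.g. a Kubernetes-side agent, roughly the VPP-agent data plane shape
    def request(self, src, dst):
        return {"mechanism": "kernel", "src": src, "dst": dst}

class VSwitchForwarder(Forwarder):
    # e.g. the OpenStack side: pick a vSwitch, add a VXLAN endpoint to it
    def request(self, src, dst):
        return {"mechanism": "vxlan", "src": src, "dst": dst}

# Each domain registers its own forwarder; there is no central SDN controller.
forwarders = {
    "k8s-cluster-a": KernelForwarder(),
    "openstack-east": VSwitchForwarder(),
}

def connect(domain, src, dst):
    """Dispatch a cross-connect request to the domain's own data plane."""
    return forwarders[domain].request(src, dst)
```

The point of the sketch is only the shape: the Kubernetes side and the OpenStack side satisfy the same kind of request with completely different plumbing.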
Well, that's useful information for the glossary, probably. Yeah. So I think at the end of the day, part of what it comes down to is that something has to arrange for the connection to whatever is providing the network service, and it may be different things at different points in the architecture, right? It may be a data plane on the Kubernetes node where the pod is running, but it may also be something that is twiddling vSwitches in OpenStack to expose what OpenStack would think of as a port. Okay. Can I make one other suggestion here, please? What we can do is create a separate section and say: these are the mappings, right? And then keep the main definitions specific to the cloud native world. Just a suggestion, right? An alternative to that would be, in each definition, to say "this is sometimes called X in the NFV world". Yes, for each of those definitions, some type of mapping. Right. Yep. So let me ask a question. I keep going back to this tree approach: there's the definition of things in the abstract, and then there are the particular ways they instantiate in the broader world. Would it be helpful to build out that tree in the glossary documentation we're talking about? I think so. Yeah. I tend to think abstract to concrete, personally. Well, that's not true, I think both. But I know that people have different tastes. I know lots of very smart people who think from the concrete up to the abstract, and I know lots of very smart people who think from the abstract down to the concrete. And I know a few very smart people who start in the middle and think outward; those guys are really interesting to talk to. But I think we just need more definitions. And to Watson's point, we need to unpack some of these things that are just too complicated to fit under a single bold little statement. Right.
Like, in the NSM context, a data plane is just anything that you can request to forward packets, whether there's a layer of abstraction between you and that forwarding plane or not. NSM is going to go and say: Neutron, give me this. Kubernetes, give me this. Kubernetes, you don't give me what I need, so I'm going to go around you and make a request for a kernel interface on my own. But we need to unpack those things and make it granular enough that it's obvious: okay, this is how, at the highest level, the app developer is going to look at this, and then here are the sub-considerations for the developers who are going to be writing network service endpoints and writing network services themselves. Because I know we keep saying we want to appeal to the application development community, but Sarah is going to write her app and then request a network service by name, right? That's how it works. Well, who wrote the network service that's sitting behind that name (sorry, that domain name)? It wasn't Sarah, more than likely, I guarantee you, because she's not going to know that she should use memif on a host-to-host connection, and that if it goes off the host, well, I've got VXLAN in my network, so that's going to be the next one down. It's going to be a sneaky network person who writes really hideous code that ends up writing those network services and those network service endpoints. Yeah, no, I completely get that. So, you know, those views will be somewhat different. But effectively, at the end of the day, I can tell you, having talked to a lot of people right now who struggle with a lot of these problems in VNFs, where they're trying to cobble together things from L2 networks: it doesn't always make sense, and you end up with these extremely complicated top-down configuration scenarios.
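The "go around you and request a kernel interface on my own" behavior is essentially a fallback chain over things that can forward packets. A toy sketch of that idea, with made-up provider names and return values (none of this is real NSM or Neutron API):

```python
# Hedged sketch: NSM-as-requester asks each thing that can forward packets,
# falling through when a platform can't satisfy the request.

def request_forwarding(providers, want):
    """Ask each provider in turn; return the first non-None answer."""
    for name, provide in providers:
        result = provide(want)
        if result is not None:
            return name, result
    raise RuntimeError(f"nobody could provide {want!r}")

providers = [
    # Neutron can hand out L2 segments via API calls...
    ("neutron", lambda want: "port-42" if want == "l2-segment" else None),
    # ...Kubernetes can't give us a raw kernel interface in this scenario...
    ("kubernetes", lambda want: None),
    # ...so the request "goes around it" to the kernel directly.
    ("kernel", lambda want: "veth0" if want == "kernel-interface" else None),
]
```

The abstraction layer (or its absence) between requester and forwarding plane is invisible to whoever asked for the connection.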
I think once it sinks in for the people who are writing the network service endpoints, it's going to be an incredible relief. No, I agree. I'm just saying, as the person who will end up consuming all of those network service endpoints, I'd want there to be some level of consistency. Yep, yep. Well, and I think part of that may actually be breaking down roles and situations, because the roles and situations are going to vary somewhat depending on the environment you're in. Even though you're trying to do the same thing, you may do it a slightly different way. So, for example, if you asked me to write a CNF to do something, I would go out and write a CNF that consumes memif and call it a day, right? Because that's super easy and simple, and it actually performs super well. If you asked me to write a VNF that participates in the network service mesh, what I would do is have it speak the remote API, have it register itself, and have it terminate some tunnels, because the things that make it super nice and easy to write a memif-based CNF simply don't exist in Neutron. So that's probably what I would choose to do in that circumstance. In both cases, you have something that terminates traffic, that advertises that it provides a network service, and that accepts requests for connections to that network service from various clients. Just in the latter case, the mechanisms you would use would be some variety of tunnel types, as opposed to using nice convenient memif and letting the infrastructure take care of the tunnels for you. No, I'm tracking. I'm more worried about the Sarah example, where I've got my application, and I've got a firewall CNF and a VPN gateway CNF, and I get them from two different vendors, and they've decided to implement said CNFs in completely different manners.
Well, the thing is, if you're talking about CNFs, it doesn't actually matter if they've implemented them in different manners, because they're exposing the network service API for how connections are requested, and they're accepting a payload of some type. That's literally all they get to show to the outside world. That doesn't mean it's going to be an optimal negotiation, though, right? Well, it may not be an optimal negotiation, but it'll be as optimal as those two can negotiate with each other. So there is also a potential responsibility on the operator who's providing the services on behalf of Sarah to validate and ensure that the components match the needs of the user, if there are any performance requirements and so on. So it doesn't eliminate or obviate the need for those choices to be made, but it does make it easier, so that when you make those choices, those items can get wired up based on the negotiation. And it has a nice property as well: suppose you have a memif-to-memif connection for most of them, but you run out of capacity on your local system. Then you don't just say, okay, we fail the connection. It also gives you the ability to negotiate something that's less optimal but still gets you the thing you need. So my view is that Sarah should not know anything about this. That's one. The next step is how an app developer, and we've had this problem before, knows: oh, I need a VPN connection, I need this and that kind of network service. They normally don't know. So the second step, and I think it's a future step, is how to make that kind of intent simpler for an app developer who wants to consume network services; how to make it simple for her to ask for this.
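That "less optimal but don't fail" negotiation amounts to picking the first mutually supported mechanism in preference order. A minimal sketch of the idea, assuming a made-up preference list; this is not NSM's actual negotiation code:

```python
# Hedged sketch: degrade gracefully through a preference-ordered list of
# connection mechanisms instead of failing when the first choice is gone.

def negotiate(client_prefs, endpoint_mechs):
    """Return the first mechanism both sides support, or None."""
    for mech in client_prefs:
        if mech in endpoint_mechs:
            return mech
    return None

# memif is preferred; if the local system runs out of memif capacity the
# endpoint simply stops offering it, and the client falls back to a tunnel
# rather than failing the connection outright.
PREFS = ["memif", "vxlan", "kernel"]
```

A vendor's internal implementation choices never enter into this; only the advertised mechanisms and payload types do.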
The second one is that you won't be able to say everybody needs to develop the same thing, because a VNF developer in Israel might decide not to code it the same way. He decides to use vhost-user, doesn't want to use memif. So you need NSM to be able to adapt to that. The entry point might be coming in over vhost-user, and then the second phase needs to go to memif; you need NSM to be able to adapt to it. And that's the same burden we've had with NFV, and we've been fighting it for five years now. So I don't think NSM will completely solve whether people write their code badly or well; that's still on us, having to carry a stick and beat them up. Yeah, exactly to the point. I mean, part of it also is that if you're writing a CNF, there is a very small number of dumb things left for you to do, right? They exist. No, no, like, part of good design is limiting the amount of stupid available to your consumers, right? Exactly. That's what those definitions do: put some guardrails in place. So when these goofballs go off into right field designing something rando, we can be like, look, you didn't follow what the definition says this use case is, so this is crazy town. Yeah, absolutely. And so, for example, let's look at the number of things you could do as a CNF vendor that would be stupid, if you look at the system today. I could hard-code the name of the next network service into my CNF as a hard-coded string. That would be not particularly bright. I could also hard-code the name of the network service being exposed by my CNF into the code. Both of those would be super not good. My favorite is: receive and accept a lot of new connections over kernel interfaces.
So you end up with an explosion of kernel interfaces, and you run out of how many you have available. Yes, definitely. Really, really bad choices of mechanisms are one problem, but kernel interfaces are so slow that anybody writing a CNF that uses kernel interfaces isn't even going to get to the point of evaluation, because it's going to be "how much traffic can your CNF pass... less than 1% of what you need". So again, part of the whole point is to limit the number of things that can be done super badly in the system, and part of that is limiting the number of knobs that are externally facing. For example, one of the things you may have noticed about network service mesh is that we literally don't say anything about how you would configure a CNF, because that's a very broad space. But we do say: this is how you would have people request connections to your CNF. And we're orienting this towards wires and payloads. And I think that's a super helpful way to approach the problem, because it limits the amount of stuff at the edge, when you have different CNFs chained together, that they can screw up. As long as I can basically tell you, as a CNF: please advertise what you're doing under this name, and please consume the logical next step that you want using this name and these labels. As long as I have those knobs to turn, there's not a lot you can actually do to screw up interoperability between CNFs in the network service mesh world. So: sub-definitions within the data plane, and maybe we tab those in or whatever. Because if I'm Sarah, all I care about is that top definition, this logical construct that provides me connections, right?
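The "please advertise under this name, please consume using this name" guardrail implies the obvious fix for the hard-coded-string mistake: the advertised and consumed service names should come from configuration the operator controls, never from strings baked into the CNF. A sketch, using hypothetical NSM_* environment variable names invented for this example (they are not real NSM knobs):

```python
import os

def service_config(env=None):
    """Read advertised/consumed service names from config, never from code.

    NSM_SERVICE_NAME / NSM_NEXT_SERVICE are illustrative variable names,
    not actual NSM configuration.
    """
    env = os.environ if env is None else env
    advertise = env.get("NSM_SERVICE_NAME")
    consume = env.get("NSM_NEXT_SERVICE")  # may be unset at the chain's tail
    if not advertise:
        raise ValueError("no NSM_SERVICE_NAME set; refusing to guess")
    return {"advertise": advertise, "consume": consume}
```

With that shape, the operator decides the wiring, and the same CNF image can sit behind any service name in any chain.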
But then for the people who are writing NSEs, writing network services themselves, et cetera, we need to make sure they know definitively: when you're considering your forwarding element, you have these considerations, and maybe the forwarding element doesn't even exist as part of this network service, because you're actually going to call something else that does it for you; you're just looking for forwarding, right? We don't need to define it right this second, because we're pretty close to the top of the hour, but we need to give people as little rope to hang themselves with as possible through these definitions, so when they come in, they have a clear and concise understanding of what they should be coding. Or if you guys think I'm crazy, I'm open to that suggestion as well. I don't think you're crazy. I do think I'm bad at operating a mute button. You're bad at what? Bad at operating a mute button. But I believe, Jeff, you brought up a very, very important point. I think this has pivoted into a clear distinction between what cloud native would be and how the legacy or NFV world would be, and I think this is very, very important, in my opinion. Right, like a roadmap of how you take your legacy stuff into the cloud native world. So my personal goal for this glossary is that people can take this glossary and the Golang SDK and start working on things, right? Hopefully they do more research than that, but at a bare minimum, they know what the definition of an individual component in this space is, and then they can use the SDK to hack around with some stuff, and we give them some guardrails via said definitions to keep them from getting themselves into trouble. Quick question, though: the SDK is currently Golang only. Do you think there should be an example of how to use it from other languages, or even an SDK for C, for example, or Python? I don't know.
Well, SDKs for other languages, great. C has an annoying problem, in that C has good protobuf support, but nobody has yet written a decent gRPC client or server library for C, so gRPC tends not to be supported in C. It is supported in C++, and in virtually every other language on earth. So C is a minor niggle. But I think if you look at what the SDK is doing, a lot of what it's doing is primarily helping you out around the actual gRPC calls themselves. Multi-language: well, I would be perfectly delighted if people were to start writing SDKs that would help them out in other languages. I think the real fundamental API is what's represented in the gRPC. Cool. All right. So I think now that we have a client and an endpoint defined, we largely have what a service is defined, and we know that the data plane is what it is; it just needs to be articulated. Next week we can get away from some of these, I don't know what to call these types of calls, where we're just trying to puzzle things out and inform each other, and maybe just get down to it. I mean, I don't feel like a network service registry is going to be too controversial to define. So let's try to knock out a bunch of these definitions next week and get as much done as possible, so that Ed and Frederick and friends can start refining this document and looking to push it out. Sound good? Are there any other topics that people, from a definition standpoint, want to really dive deep on next week before we start trying to get a lot of the busywork out of the way? I suggest defining the different audiences and giving each a name that everyone can reference. Then that can be used either in sections or directly on each one of the glossary entries; defining all the audiences and giving them names will make discussions easier. Okay, I'll put it in the meeting notes that we need to do that next week. Oh, you already did.
Thank you, anonymous bat. That would be me, Taylor. It certainly adds a nice feeling to the entire experience. No, but I mean, I feel like the people who attend this call are getting a better understanding of what's actually going on. And I think we're getting pretty close to the point where, for something like the network service manager domain, we can just go into the spec, pull out the information, put it into two or three sentences, and drop it in here. I think we're getting pretty close on this document. Yeah, and by the way, pushing in all these directions, I certainly feel like I understand quite a bit better the places where, and the better ways in which, to phrase the thoughts I'm trying to express so they're more comprehensible. Yeah, Ed, my big fear is that I tend to understand, usually right out of the gate, what you're trying to convey, but I have a hard time articulating your thoughts to the people who sit in my building. So I'm just trying to figure out how I put the language around all of this, you know? No, no, that's absolutely fair. And articulating things is actually useful, and having easily replicable articulation is super important, because as much as I enjoy explaining things to people, I cannot do that all the time. All right, friends. Well, I'll see you maybe on Friday's call, and if not, definitely next week.