All right, we can get started. For those who haven't been here, this is the Telecom User Group. It meets at alternating times on the first Monday of the month; this is our later-in-the-morning time, and next month it'll be earlier. If you would like to talk about anything, you can add an item to the meeting notes, which have been posted in the Zoom chat, and add your name as an attendee as well. So we just finished ONES. We have the LFN technical virtual conference coming up on the 13th, and then KubeCon after that — the schedule's been announced. Does anyone have anything interesting they want to add from ONES? I see Gergely and Tom and a bunch of people on here who attended — just point out some interesting things that happened at ONES last week.

Hey, Jim, it's Gergely. Yeah, it's very difficult to remember any highlights yet; I still haven't processed my notes. But I do remember there were these two sessions in parallel. One was the TUG birds-of-a-feather session, and the other was the panel — I don't know what the exact title was, the Cloud Native Warlords one — and I ended up in the Warlords one. We had quite an interesting discussion in the panel chat about the different problems we have at the intersection of cloud native and telecommunications.

Yeah, that was a good panel. I'm hoping that we can do more of those. It's hard, I guess, even with a recording — making the recording and then having the Q&A be live is good, but I guess that becomes a discussion either way. Unfortunately, for the birds of a feather for the Telecom User Group, the Zoom links didn't work — or they were not clickable, I should say. But for that one, we're going to have an updated version at KubeCon; we were trying to make it a live session so it would be interactive, similar to the panel. I guess I'll point out another one: on the Monday, we put together — I guess it'd be a workshop versus a tutorial — where we had six different CNCF projects, one after another, each giving a condensed introduction and demo focused on how to use those projects for telecom or edge, back to back. So if folks haven't seen those, as soon as the recordings are available I'd recommend checking them out, because you get six CNCF projects all one after another. We're hoping to continue following up with those projects on how to collaborate and get feedback into the different communities and projects. I know just from some of the feedback across the groups that the people who were presenting were hearing stuff on how they could interoperate that was new to them. Does anyone have anything else they want to mention or talk about from ONES?

I have a question about the birds-of-a-feather session. Were there any new participants, any new parties who are interested?

Only a few people made it on. You had to get into the session to be there live — you had to get to a PDF that was attached, which had a Zoom link that you then had to type in manually. When that was noticed, the session had already started; I know a few people started posting the link in the Slack channels, but I think by then it was too late and people had shifted to other talks. So I did notice a few names I hadn't seen, but there weren't too many, unfortunately.

Okay, that's a good learning for next time, yes.

Yeah, for sure. It would be a good option for this conference to support live sessions. I think for the panels it was...
Zoom, and unfortunately it was just problematic to have the two platforms working together. And I don't know if anyone else caught this, but I think it was Monday of last week, either end of day Monday or maybe Tuesday morning, Zoom made a security change that caused problems across the board for ongoing sessions, including I think some of the CNTT sessions — password requirements and waiting rooms and stuff like that. So that probably caused some problems. Also, that was our first experiment with INXPO, the platform being used there, and there were a number of parts of it that were kind of clunky, for lack of a better word, and didn't quite work as expected. So we're trying to experiment with a number of platforms to make these virtual events more engaging. We're aware that with so many virtual events there's a bit of virtual event fatigue setting in, so we're doing what we can to try and make them as close to live events as possible. This was an experiment with the INXPO platform, and yeah, we did have a number of things that just didn't go quite as we wanted, for sure.

I appreciate you all trying. I think the public Slack channel for ONES worked pretty well, and there were a lot of ongoing conversations that ran over multiple days on topics from different talks. That was really good to see, and it'd be nice to make sure those continue past that time period and don't get lost. There's a lot of good content.

Totally agree. There's a question in the Zoom chat from Irvin: beyond high-level discussions on CNFs, is it the right topic to discuss which particular hardware acceleration technologies are employed by members — for example, DPDK or AF_XDP — and how best to stitch them together with containers? All right, I think that's a good topic and question; we can just add that to the agenda.

Sorry, I just wanted to ask because — is it too obvious? That's why I asked, because we have been working on that quite a lot. But if it's not a topic for the agenda, you can disregard it; it was just a question, because it's beyond the high level of discussion. We are, I would say, living a little bit too much at the high level, and this was just a question about, I don't know, getting our hands dirty with some kind of technical discussion. But if it's not the right topic for the agenda, please disregard it — I'm not insisting.

No, I think that's a great question, so I've added it to the agenda; we can come back to that. Are there any other items on events that anyone wants to mention? I mentioned the upcoming ones as well, LFN and KubeCon.

I'll give a quick update on the CNF Conformance test suite. We have a pretty large group here, so for those who aren't familiar with it: CNCF has, I guess I can just say, three initiatives. One of them would be this group, the Telecom User Group. There's the CNF Testbed, which is a whole toolchain and framework for working with technology solutions, and it deploys to Packet — if you're interested, go check out the CNF Testbed. And then CNF Conformance is a test suite — you'd think of it as similar to the Kubernetes conformance test suite, or maybe the E2E tests; it's actually more similar to the Sonobuoy side as far as the configuration and how it's set up. Its goal is to provide a way to test cloud native principles and properties for both the CNF/application side and the platform pieces, beyond what the core Kubernetes E2E tests and conformance are covering.
So this could get into items like the hardware acceleration question that was just asked: how do you provide it in a way that services can consume it? And you could think towards Kubernetes-native as far as implementation, with the driver being the underlying cloud native principles. It's an open source test suite; anyone can run it. We've recently updated it — it started with workload tests, and then platform tests were added. Platform tests were run separately; you would designate that you were going to run the platform tests on their own. Now the workload testing, or CNF testing, has been moved under a whole workload section as far as the namespace and everything. So you can either run the entire workload test suite, or you can run categories or individual tests, and you can also run the whole thing if you want to test a platform with specific applications running on it. Most of the new test focus has been on adding platform tests — this could be stuff like what happens when a node reboots or dies, and that sort of thing. And there have been a lot of updates on the usability side of getting set up, based on feedback over the last month. Some of that is to make it easier to quickly get started; some of it is based on feedback if you're a developer trying to run the tests and make changes, so that it gives you the feedback you want, or lets you change different levels of logging and get a lot more detail.

And then we're trying to look at what CNTT's needs are on the requirements side for workloads and platforms, working with the badging side — so certification — and ideally CNF Conformance will be one of the layers, a checkbox for telecoms to say, okay, it meets these needs, it passes, it's Certified Kubernetes on the platform, and it also does these things for platform add-ons like CNIs and CSIs. That's where the goals are, so we're taking feedback there. On that note, we recently got integrated with CNTT for the testing, and it's now running for workloads specifically. Platform tests are not released yet in the integration we were using, but it's now running in the CNTT OPNFV Functest, and those runs are happening. So that's the initial integration, and we're going to continue down that path to get feedback and make sure it works in a way that helps those end users — but anyone else can run it directly as well. Does anyone have any questions, comments, or anything about the CNF Conformance test suite? If you're talking, I can't hear anything — it's muted — otherwise we'll move on.

So we had a couple of items on the agenda. I know one person wasn't able to join us this morning, so we'll bump their talk to an upcoming meeting. And I think Sadeep was going to talk with us about 5G CNF deployment models — is he on the call? Sadeep? No — so it looks like we may need to defer that one. So we have an ODM introduction and this CNF model; we'll bump those both to a future meeting.

All right, we can go to Irvin's question. This may turn out to be something that would be nice in a survey — there have been surveys in the past, and I know CNTT had a survey just several weeks back for some focused stuff on new releases and direction — but we can try to see what feedback we can get on this. I don't know if y'all can see the question here in the meeting notes; I guess I'll share mine.
So the question is what particular hardware acceleration technologies people are employing, and then maybe it's a further discussion on how to use them within containers — or, you'd say, not just containers but containers and Kubernetes — and how we would do these in a different way, or whether there is a better or wiser way than what happens today.

Yeah, I think of it this way: we are just looking for some, I don't know, advice or opinions, because I think everybody knows DPDK, but there is also this address family XDP — AF_XDP — or eBPF. Both technologies are bypassing the kernel in terms of dealing with the network stack. So there might be some kind of suggestions from the other members, just what they are preferring or what they use, because it would be quite interesting to also understand what their opinions about the technologies are — especially further on, with containers and Kubernetes.

I could answer, maybe. Can you hear me at all?

Yeah, go ahead.

Sorry — so I would answer briefly in two ways, for things we're doing at Red Hat. eBPF as a platform is really maturing quickly; there's a lot of interest in it from various aspects, and very recent versions of the Linux kernel have really improved it in many ways. So yeah, that's definitely a very exciting path. I'll also mention SmartNICs. It really depends on your use case and what you're trying to do, but with a SmartNIC — some sort of FPGA — you can offload a lot of your networking work. You can offload, for example, OVS (Open vSwitch), and really layer OVN on top of that and create a whole data plane based on SmartNICs. It's challenging to integrate this into Kubernetes networking seamlessly — it's definitely not trivial — but these are two areas that we're working on at Red Hat.

So there's a third one — this is a couple from Jennifer. In addition to DPDK and SmartNICs, there is another one, which is SR-IOV; that's becoming quite predominant as well. And beyond the SmartNIC, again, there is another one, which is GPUs. So those are the ones which are predominantly being looked at by most of the telcos.

Yeah, understood. Now, we are mostly focusing on the public cloud in particular. If, for example, our deployment environment is AWS, then of course they have support, I would say, with this enhanced networking adapter; they're supporting DPDK and XDP. So I just wanted to gather some insights: have members done something here, and is it worthwhile, and which one to shift to — whether it's DPDK or AF_XDP — or are they overlapping each other, and will it end up being one technology in the end?

So there is another one, which is a competitor of AWS — it's a smaller company, StackPath. They are geared towards fully cloud native deployments across the globe, and that's geared towards Kubernetes, 100% Kubernetes, and they're using fully SR-IOV underneath. So if you want to deploy Kubernetes in virtual machines, they give you 25 Gbps throughput in your VMs, and that's more than adequate for most application deployments. So like I said, those are the three very predominant ones.

I'll mention quickly that SR-IOV has a CNI plugin that was actually contributed by members of my team.
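(For reference, since the SR-IOV CNI plugin keeps coming up: below is a rough sketch of the NetworkAttachmentDefinition shape that plugin is typically used with under Multus. The network name, VF resource pool, and subnet here are illustrative assumptions, not something taken from the call.)

```yaml
# Illustrative only: a NetworkAttachmentDefinition that Multus can use to
# attach an SR-IOV VF to a pod via the SR-IOV CNI plugin. The resourceName
# annotation ties this network to a VF pool advertised by the SR-IOV device
# plugin; the pool name and subnet below are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-a
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "ipam": {
      "type": "host-local",
      "subnet": "10.56.217.0/24"
    }
  }'
```

A pod would then reference `sriov-net-a` through the usual `k8s.v1.cni.cncf.io/networks` annotation and request the matching VF resource in its resource limits.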
And as long as you have the hardware — if you have hardware that can support it — I think that's a very quick way to start getting into high performance networking in the lab.

Yeah. Yes, there are actually several SR-IOV CNI plugins.

Yeah, sorry for plugging my own plugin — my team's plugin.

Of course, yes, that's true. This is Anand from NEC. We have tested a couple of CNFs, and many implementations are using SR-IOV with DPDK — an SR-IOV VF, then DPDK or VPP inside the CNF — to achieve this. With XDP, what we have seen is that we could not find a vendor that supports XDP at the pod level. There is an XDP driver that bypasses the netfilter level, which gives throughput similar to DPDK, but for communication from pod to pod that does not go via the network card. We evaluated a couple of CNIs, and they said these accelerations are not available today. So most of the implementations we are seeing today with SR-IOV are SR-IOV plus DPDK or VPP. And like Red Hat mentioned, there is the SR-IOV CNI, which is available in OpenShift with Multus, and some of our customers are using this at scale.

Do you know — can you tell me what's different between the OpenShift version and the Intel version? What's the additional value of using the OpenShift SR-IOV CNI?

I can't give a complete answer, other than saying that it's just very well integrated. OpenShift does come with Multus, which is fully supported, so if you want to add an extra SR-IOV interface, you can manage that. I can't say much about installation and infrastructure; I do not know enough.

Okay. Do you support both Mellanox and Intel cards?

I know that with Mellanox, those SmartNICs are on the roadmap, and some support is already there. It really depends what you want to support, right? There's a lot you can do with SmartNICs; they're very, very flexible. I know that fully integrating OVN and OVS via SmartNICs is on the roadmap for OpenShift — I don't know exactly when.

But now we are talking about the SR-IOV CNI.

Oh, SR-IOV specifically — sorry. Yes. Well, you mean Mellanox SR-IOV? Yes. Oh, okay, sorry — I'm deep inside Mellanox SmartNICs these days. I don't know for sure which ones. SR-IOV is a pretty standard protocol; I don't know if the differences are that important, or which hardware is certified exactly, but I imagine the major ones. I can get you a list if you're interested.

Yeah, to add to that, some of the Mellanox cards make it into OCP 4.5 SR-IOV.

So what providers allow you to control more of the network and use stuff like SmartNICs, SR-IOV, and accelerated networking? I think one of them that was mentioned, if I heard it right, was StackPath.

Yes, that's correct. That's correct, Taylor, yeah.

Okay. Who else besides StackPath? And I guess I could throw out Packet — I mentioned them earlier. Does anyone know of any others?

Would we put OpenShift in there? Since we're talking about being able to use...

If you start listing products, you have to add your own product also.

You have to add what? Your own product also.

Well, I guess I'm thinking — I know that with some cloud providers, you're not going to have control over what would essentially be the underlay. At Packet, you can actually set up layer 2 between machines. And then, unless I'm off and it's changed recently, with some of the cloud providers — without actually naming them — you can do an overlay, but you're not going to be able to set up the layer 2.

They seemed to be asking about public clouds, right?

Yeah, public clouds for sure.
So OpenShift would not be included. I know OpenShift runs on Azure at Microsoft, but I don't know if they actually provide the hardware for any of this. All right, so maybe Azure, question mark? Not sure. What's happening, maybe, on that — and tying in with usage — with CNTT, or maybe I should say the Elephant lab, for doing testing for the new Kubernetes CNTT implementation? I haven't seen the latest on that; I know that there were discussions with various places.

No, I don't have the latest info.

You don't have it? Okay. Gergely? Anyone? Yeah. Is Tom on here? Tom, do you know, on the RI and testing side?

Sorry, what was the question? Apologies.

Just wondering — I guess this ties in a little bit with it; it's not fully open — but what lab is being used for CNTT for the RI? I know that Elephant and Jimmy may have input, if you're still on.

Yeah, it's mainly been driven by the Kuberef project in OPNFV at the moment, and it's been deployed into, I think, an Intel pod within the OPNFV labs.

Yeah, it's currently running on the OPNFV Intel community lab. Oh, hey, Michael. I think there might be a couple of other deployments happening as well, and George can add a little bit of detail on that one.

Yeah, hi there. Basically, it's true — this is a bare metal lab in the OPNFV context, and there are basically three different labs or pods we have available right now, hosted by Ericsson and the Intel lab as well. So from that perspective, it's a bare metal thing: you have full control over everything, because you deploy everything. It's not a public cloud environment; it's really a bare metal lab. Sorry, I joined a bit late, so I don't know exactly what the question is about, but that is what I can say currently — if it's about SR-IOV and all that, that's configurable. We don't have SmartNICs in the lab at the moment, though.

Okay, that's interesting — good to know. So the whole topic was mainly about what people are using — they're talking about hardware acceleration — and that kind of goes into what's available technology-wise and then where you can use those things. Of course, in a production internal environment you can do whatever you want, but it's nice to know what's out there for collaborating. I would call the CNTT lab maybe almost a hybrid: it's not fully public, but if you're interested in joining CNTT and collaborating, then you'd end up with access while you're working with people. So that makes it possible for folks who want to know how something looks — maybe if someone wanted to see how SmartNICs or something else would work with the reference implementation — to put that forward and maybe collaborate.

All right. Does anyone have anything else they want to talk about on this before we move on and let Tal talk about his topic? And I guess, just to mention it one more time on open clouds and where things can deploy: the CNF Testbed deploys to Packet, and there are a lot of people deploying there. So if you are looking for an open cloud, it works well if you want more control, including more direct access to the hardware all the way from boot and BIOS. And then the open labs — if you want all your own hardware, that goes down a different path.

All right, let's hear about KNAP. Is that how you say that — KNAP? I think I pronounce it "knap" or "nap" sometimes. Knap is actually a word. So... do you want to share your screen?

Yeah, that'll be a great idea. Thanks. All right, so I will... here we go. So I'll take about 10 minutes for this, and I'm not going to do a demo.
I actually — this is kind of ad hoc. I thought Alex Vuhl would be presenting today, but when I saw he was removed, I thought I would quickly put myself on the agenda. So let me explain the problem; I think you'll understand it pretty well. The problem is... no, it won't be here. Oh, examples. All right. The problem is this: if you've ever used Multus, you know that you need a lot of knowledge to configure the networks.

Let me expand on this a bit. I'm giving an example here of a very trivial or straightforward use of Multus. The idea here is that I have two deployments: the first one is attached to network A, the second one is attached to two networks. As you may know already, there is a special annotation, a CNCF annotation, that activates Multus; Multus will know how to look for this and attach an extra interface for this network. And then similarly, we have another deployment here that has two networks, explicitly A and B. Now, these names are the names of custom resources called network attachment definitions, and these are very simple CRDs — all they do is include the configuration for CNI. So you give it the CNI plugin; here I'm just giving an example with a simple bridge. Pretty straightforward, right? But actually very, very difficult to manage at scale.

I think, as anybody who's tried to use Multus knows, in order to write this configuration you have to be a system administrator, or at least have access to system administrator information — not just for the cluster, but even for the particular host on which this deployment and its pods will eventually be running. Because you need to know which technologies are available: if you're using, for example, SR-IOV, you need to know what SR-IOV hardware is available. If you are configuring IPAM, you have to know who else is configuring IPAM — it could be somebody not even in your namespace, some other workload — and you would need to make sure, for example, that there wouldn't be IP range conflicts. So in this case, since I wrote this, I can be sure that, for example, I have this subnet and this subnet.

There's just one comment. I think that's a fundamental design error in Multus, that the network administration and the attachment of networks to pods are kind of meshed together.

Okay, I'll say this — I think my answer is: yes, you can say it's a design flaw, or you could say that Multus is designed to solve one very, very specific problem, but it's only half of the problem. So I would call what I'm presenting today "Multus, part two," as I sometimes call it. Multus solves this very specific problem of attaching CNI perfectly, I think — or not perfectly, it also has gaps. But yes, absolutely, this is what I'm trying to show today: that this is a big, big problem, a major problem. In fact, if you're coming from the world of OpenStack, you're used to having network as a service — you have Neutron. And Neutron, of course, has many limitations; I don't think we want something identical to Neutron for Kubernetes, because Neutron assumes overlay networks. It already assumes that you can create any subnet you want, and it makes sure that you get it. That's not always the use case we want in Kubernetes, and definitely not in telco. But we do need some way to manage these, and that's the project that I'm really showing you today. I called it KNAP, and I'll say this is a POC. I think what I'm trying to do here is open up this discussion.
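(For reference, roughly what the Multus pieces described above look like: a NetworkAttachmentDefinition carrying a plain bridge CNI config, and the pod-template annotation Multus looks for. This is a minimal sketch — the names, bridge, and subnet are placeholders, not the actual slide content.)

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: network-a
spec:
  # The NAD simply wraps a CNI configuration; here, a basic Linux bridge
  # with host-local IPAM (placeholder values).
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br-a",
    "ipam": { "type": "host-local", "subnet": "10.10.0.0/24" }
  }'
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-two-networks
spec:
  replicas: 1
  selector:
    matchLabels: { app: example }
  template:
    metadata:
      labels: { app: example }
      annotations:
        # The CNCF annotation Multus looks for; listing two NADs attaches
        # two extra interfaces to each pod, on top of the default network.
        k8s.v1.cni.cncf.io/networks: network-a, network-b
    spec:
      containers:
      - name: app
        image: nginx
```

As the speaker notes, everything inside that `config` string — the bridge name, the IPAM subnet, which host resources exist — is administrator-level knowledge that the workload author normally doesn't have.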
And I don't want to be the only one providing a solution here; it seems that some of you already know that there's a big gap here. I have many other ideas on how to solve this too, but this was my first stab. The idea was to use the operator pattern, so KNAP is an operator. And what it does — I'll go back to the example. Here's the same example we just saw, but using KNAP resources instead. The kind is Network, so this is a new kind of custom resource, but you'll see there's no CNI configuration here. Instead, I'm specifying a provider, and I say that I want a bridge provider. And then similarly for the deployments, I just use a different kind of annotation. Again, I give it a name, but in this case I'm not referring to a network attachment definition — I'm actually referring to these networks. So if I did a live demo right now, you'd see that the end result is the same: the deployment works in the same way, and you're going to have those — sorry, those network attachment definitions — created for you automatically by the operator through the provider.

So a pretty simple solution, right? Obviously, all the knowledge has to be in the provider. The provider has to know what to do and how to provide these networks — so it's provisioning them and also deprovisioning them. For example, if there's a pool of subnets, the provider will know how to hand one out and to unprovision it, returning it to the pool if it's no longer in use. If it's SR-IOV, you have a very specific, limited number of resources, of course, so you'll have some sort of provider running and knowing what resources are available, maybe by introspecting the node, something like that. So this is just a quick explanation of the basic idea, and I think it's actually very powerful, because the idea is that now I can design workloads that use Multus, but I don't have to know anything about the system administration stuff — that's offloaded to this provider.

The secret, of course, is making these providers. So that kind of explains the rationale of what's happening here. The provider part is kind of interesting, and I want to explain how the providers work. They work through a system that I call extra-thick plugins. Some of you know a little bit more about Multus and about CNI plugins — in Multus, and in CNI generally, we talk about two kinds of plugins. On the one hand, there are thin plugins, which are just one-shot executables that run; that's kind of how CNI works, it's a command line interface, so you give it standard in, it gives you standard out — very, very straightforward. You can have a CNI plugin that's designed just like that: it runs, it does what it needs to do, and when it finishes it quits. We also talk in CNI about thick plugins: yes, you run a one-shot CLI interface, but you have some sort of service behind it — maybe it's a systemd service, maybe it's a Docker image running somewhere, maybe it's something even external to the cluster that's running and provisioning networking for you, if it's some sort of SDN solution, for example. Those we call thick plugins because they're not just one-shot; there's something running all the time. What I'm doing here with KNAP, I'm calling extra-thick plugins, and what I mean by that is that these plugins are running as pods within the cluster itself.
So that provider — that bridge provider that we saw in the demo example — is actually running as a pod. Now, why is this interesting? You could also call them cloud native plugins, if you like, because they're actually native within the cluster itself. The advantage of doing something like that is that you can have a network function actually work as a provider. So in an interesting way, you can have a network function — say a CNF running on Kubernetes — which uses KNAP to get, for example, extra interfaces for a data plane, if it's SR-IOV or something like that, or SDN connections to remote sites, SD-WAN. But also, because it is a network function, it might be providing network services that it can offer to other pods. So a pod could be both a consumer of Multus here and also a provider for Multus. That was the architectural decision here in terms of these thick, or cloud native, plugins.

I'll mention quickly a big disadvantage of the solution — some of you might have already identified it. Multus can only work during the initialization of pods: if the pod is already running, you cannot dynamically change the interfaces. It's not a feature that Multus supports right now, and generally Kubernetes doesn't. Of course, you'd like to think that your pods are very lightweight, so they'll just be restarted with the new interface information if something changes. So right now, the way KNAP works, you always see all the pods coming up and then being restarted after the CRD is created. That could be okay, or that could be a fatal flaw. So I've really considered another way of solving this: not via the operator pattern, but instead by actually being an extra layer in front of Multus. If Multus is an extra layer, a kind of multiplexer for CNI, you can have another solution before that, which would make sure to provide those CNI configurations for Multus — or it could be integrated into Multus. As the question was here before: if this is considered something that Multus should be doing, it could be enhanced in that way. But that would really be growing that particular project.

So I just want very quickly — and with this I'll finish my little presentation — to talk about this specific question: is this kind of a Neutron for Kubernetes? I would say that a little bit it is. The idea here is to give the same kind of ease of use that we have in OpenStack also for Kubernetes. And by ease of use, I mean that developers are able to create workloads without having that administrative information. So it's not so much a matter of difficulty; it's a matter of impossibility. If you do not have that information, you simply cannot write those workloads in advance. You would have to create some other system to do that — and I know people do that sometimes with Helm, sometimes with other kinds of rendering before the deployment — but you end up having to create your own deployment system to make this work, and I think this removes that requirement. At the same time, I want to say the idea is to do something here that's really cloud native and not to duplicate Neutron, not to create exactly networks as a service. It's more about network attachment definitions as a service. But this is the point where I really want to stop and open this up. This is me giving a one-shot entry into this problem, but I really imagine that a lot of you have other ideas or want to discuss this. So I'll stop here and open it up if there's interest.
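(To help picture the KNAP resources described above, a rough, hypothetical sketch of the shapes involved is below. The API group, field names, and annotation key are assumptions inferred from the talk, not the actual POC definitions — only the overall idea, a Network resource naming a provider instead of embedding CNI config, comes from the presentation.)

```yaml
# Hypothetical sketch only: API group, field names, and annotation key are
# assumed for illustration, not taken from the real KNAP POC.
apiVersion: knap.example.io/v1alpha1
kind: Network
metadata:
  name: network-a
spec:
  # The workload author just names a provider; the provider (an "extra-thick"
  # plugin running as a pod) renders the NetworkAttachmentDefinition and
  # manages details like subnet allocation.
  provider: bridge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  selector:
    matchLabels: { app: example }
  template:
    metadata:
      labels: { app: example }
      annotations:
        # Assumed KNAP annotation: it references Network resources rather than
        # NetworkAttachmentDefinitions; the operator creates the NADs and the
        # standard Multus annotation behind the scenes.
        knap.example.io/networks: network-a
    spec:
      containers:
      - name: app
        image: nginx
```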
Yeah, so I think that's a very, very important thing, let's say, to separate network administration and tenant network creation. And I think that's something that we do not have in Kubernetes, and we might need some kind of an API to manage networks. I had some similar ideas, but my approach, or my idea, was to create CRD definitions for the API and to use different controllers to, let's say, implement support for different backends — the same API could be used for Multus or Network Service Mesh or whatever networking solution the infrastructure has. But this is, I think, pretty similar to that in some sense, and it partly addresses the same problem.

Well, let's join forces. I would love that — as I said, I'm not married to my idea. This was an attempt, and as I said, it does have certain disadvantages. And also, the real meat here is the providers and how they work. A demo using the bridge provider here is relatively trivial: the way my bridge provider works, it just saves a pool of subnets to a file, so it can manage it and synchronize on that file to make sure that everybody gets their own unique subnet, and then they can return the subnet back. But obviously, for more complicated networking solutions, there's much more work to be done during provisioning — for example, you might need to configure a PNF somewhere using NETCONF. So the magic really is in those providers, I think. And if there's a standard API for that, that of course would make things easier for everybody. But yeah, if there's interest, I would love to continue offline and see if there's some way we can collaborate on solving what I think is a very big problem that we have with networking.

Yes, okay, let's do that. I'm open for collaboration. I'm trying to figure out who I can get from Nokia to actively participate in this, and when I'm done, I will send out some emails, let's say, because I have a list of interested participants also from the last OpenDev conference, where we had a very similar discussion. We had the conclusion that we should have some kind of an agreement on a networking API for Kubernetes, and we also kind of agreed that this should be done as something out of tree, because the Kubernetes networking SIG is not really interested in these more advanced networking problems.

Yeah, definitely, anything could start out of tree. I was saying that there's interest, but they're looking much longer term. So how do you deal with something now versus something that may take a lot longer before the design comes out? You've been in the plumbing groups — you know there are discussions; it's just that they're looking at a much longer term.

Yeah, exactly. And even Multus was very painful to get to where it is right now; there was a lot of resistance to that. It kind of looks like a temporary solution, right?

So I'm just gonna go ahead — I'm Ryan Tidwell from SUSE. I was just gonna mention here that I'm interested in and open to collaborating on this. I've got a lot of background with Neutron — I've been a contributor there for quite some time — and I'm kind of moving into the Kubernetes space. And I'll just say that the problem you mentioned here is one that, on the surface, is pretty obvious to me as well. With Neutron, I think maybe we have an API that's very focused on infrastructure and those sorts of objects, whereas with Kubernetes maybe we swing the pendulum a little too far and wave our hands around things.
And what you're describing here seems to kind of be aiming for that middle ground, and I'm interested in and open to collaborating on this as well.

Wonderful. Yeah, I'll say, you know, my focus is on orchestration. I care a lot about the underlying technologies, of course, but I think we all agree that Kubernetes is not as mature as the legacy clouds in terms of managing these resources at scale. I think the scheduling paradigm that Kubernetes introduced is very scalable in itself, but I think it caught us all off guard in terms of adapting our systems to it. I think this is part of the importance of this TUG, right? That we can really discuss these challenges and see how we get there. So, wonderful — I'd love to continue this conversation.

One more guy jumping on board here. Hi, Tal. I'd also like to — well, I won't restate the importance of this; I think that was mentioned already a couple of times — but yeah, George from Ericsson here, and I'd like to be part of that as well. I think I should be able to find your contact somewhere, I guess, so that we can continue — all of us, of course — in some other context to discuss this, right?

Let's just start a list in the minutes with the interested parties.

Sure. Yes, please — if anybody could add themselves in a section in the minutes, I'll just send a group email to everybody who's interested, including people who didn't speak up.

Awesome. Thanks, Urs. A quick question, maybe to end this, from Victor Morales — I think that's towards you, Tal — about upgrading to the new version of, I think it's covering, the Multus annotation for the default network. And then there's another question about the de-scheduling. That's in the Zoom chat — do you see those?

All my windows are... let me look at the chat.

I'll drop it into the actual Google doc.

Let's see... oh, here it is. Yeah, I am considering the new Multus, of course. So I'll say, you know, this is a POC — it really works, but as I said, I'm even reconsidering the whole use of operators here. I think the disadvantage of having to restart pods could be a major one. I could use some help brainstorming how it could work, so I'm very interested in getting feedback and trying out different solutions. You know, if you try different POCs, I think that's the way to find something good — meritocracy.

All right. I'd like to hear about the differences between this and the approach that Network Service Mesh is taking — one of the things they're doing is being able to make modifications after the pod is up — but we're out of time, so maybe that can be a follow-up discussion.

Yeah, it's an excellent question. And personally, I don't think that Multus is opposed to NSM; I think Multus could be an implementation for NSM. I can see all this working together, possibly.

Oh, it can — there have been proof of concepts using Multus directly. They're definitely complementary on the Multus side; I was specifically asking about KNAP.

Right, right. Yeah, I'm thinking about it too, but I'm sorry, I have a really sharp stop — I have to leave right now.

Oh, I understand — thanks, you know, for the group. Thanks, everyone. The next call is 1100 UTC — that's 3 a.m. Pacific time — for the next call in November.

Excuse me, just one question: will the recording of this meeting be available?

Yes — I will check with the folks who have access to the recording and make sure it's available.

Yeah, please, because I think there were some key points, and I missed some of the initial parts. So if you can, yeah.
Thank you very much. Sounds good. Thank you. Thank you everyone. Bye-bye. Thanks. Bye-bye.