If something comes to mind, feel free to put a message in the chat or speak up during a transition and we'll see about adding it in. Okay, so announcements: the cloud native network function seminar at Open Source Summit is on August 28 in the afternoon in Vancouver. If you haven't registered and you're intending to attend, space is limited, so the sooner the better. And Tom had a great point earlier as well. He wanted to attend the seminar but was not as interested in attending the full summit, and he seems to have found a compromise to paying the full cost of the summit, which I think is $1,000 or close to it. That is to join with a hall pass, which is $150, and then sign up for the network function seminar as part of that. So that may be a way around paying the full amount if you live nearby and don't want to pay the full cost. One thing to keep in mind, and I did want to make this point very clear: when you register, there is a special box you click to indicate that you are also registering for that seminar, and in the event the room gets too full, it's important to have clicked that box. Folks, I think today's the deadline before the rates go up, in case you do want the full registration. I'm not sure whether the rates will also go up for the hall pass. That's an excellent point, and I don't know the details behind that, but really, the sooner the better, because as they specifically stated, they're even asking people whether they're going to contribute to the seminar actively, and if not, they consider that not attending — they actually put that right next to the checkbox. So we need people who are knowledgeable and who want to speak up. So definitely check that box. Okay, special announcements.
Network Service Mesh was listed as the first highlighted session in the Linux Foundation's PR blast announcing the ONS Amsterdam speaker schedule. They had a list of the keynotes they're presenting, then said "highlighted sessions include," and Network Service Mesh is listed as the first one. So, pretty happy with that. The link, if you want to see it or show it to people, is in the meeting notes. An interesting thing we found as well: at the same time the cloud native network function seminar is going on, VMworld is having a CTO panel that is going to have Network Service Mesh on the agenda. I'm not sure how they found Network Service Mesh, because we haven't been very proactive on the marketing side yet, but we're starting to get noticed and people are starting to talk about us. — I have some background on that; I think I know how they found out about us. In addition to VMworld, VMware runs this FutureNet conference they've done for the last two years, and they're doing it again — it's a smaller, invite-only networking conference. I gave a talk at it a few years ago, and this year I tried to get a talk submitted to FutureNet. They moved it to only one day this year instead of two, so they were pretty selective. I think that's how they heard about Network Service Mesh, because I was trying to get something on the agenda there. But I'll still be attending, so I plan to talk about Network Service Mesh on the hallway track as well. I can put a link to that in the notes too — actually, maybe I should do that; that might not be a bad idea. — Okay, cool. Yeah, that'd be good to add a link. Oh, I forgot to ask: can somebody share the agenda as well?
That way we can make sure everyone can see it. — I can do that, just a second. Lucita is not with us today; she did it last time. So I'm the poor substitute, but I'll do my best here. Just a second — I'm slow; there's a lot of stuff on my desktop. — Cool, thank you very much. Okay, so agenda items that we are tracking. Some of the highlights we've gotten done include the getting started guide that Thomas Herbert put together. It's quite detailed. We need people to give it a try, and then we can make recommendations on areas where it can be improved. Overall I think the document is great, but there are also areas where we can definitely use some additional input. I believe the focus was primarily on one environment, so someone needs to fill out the other sections, and so on. We've also added the ability to capture stack traces on our errors via our logrus plugin. So now when you use logrus, with the plugin we have you can add in `WithStackTrace(err)`, passing in whatever your error is, and it will automatically inject the stack trace of the error into the logging mechanism. So that's definitely going to be useful. Tied to that, we're going to split our error handling into two parts. One of them, the reporting, is going to be through logrus. And there's a project we're bringing into the fold called go-errors, which is an error system. — Quick comment: Tom, we can see the email you're composing. I don't know if that's intentional, but I wanted to make sure you were aware. Sorry to interrupt; please go ahead. — No, that's important. Good point, thanks. So, we're moving over to a project called go-errors.
go-errors is effectively a better package for manipulating and managing errors, and go-errors is ultimately what manages the stack traces for us. So it's an integration of the two, between go-errors and logrus. Okay. We've also had some improvements on our SR-IOV path, so I'll let Sergey talk a little bit about the work he's been doing with SR-IOV. — Right. So there are a couple of directions. The first was to develop a tool which scans a host, detects all VFs defined or existing on that host, and puts together a ConfigMap containing all the information the controller requires. Since the tool cannot really tell whether a given VF belongs to a specific network service, there's an intermediary step where a user edits the ConfigMap and maps each VF to a desired network service name. Once that's done, the map is instantiated — basically you run `kubectl create -f` for that file. Then the controller, which is the second part, detects the instantiation of the ConfigMap and creates and advertises these resources to the kubelet. Once that's done, a pod can request an SR-IOV VF, or multiple VFs, by referring to the network service in the resources section of the pod spec. Say you put, under resources, the SR-IOV network resource, and provide the network service name — say one-two-three — and you end up with a pod with four VFIO devices which you can use at your pleasure. The first part is pretty much completed and ready to be merged. The controller part is almost done; the VFIO device in the container is operational. Jan provided a nice testing tool, and I'm working on some cleanup and on adding the delete and update functionality.
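The per-pod request Sergey describes might look roughly like this. The resource name here is entirely made up — the actual name advertised by NSM's controller may differ — but the `resources` mechanism is standard Kubernetes extended-resource syntax:

```yaml
# Illustrative only: "networkservicemesh.io/sriov-123" is a hypothetical
# per-network-service resource name; the real naming scheme is NSM's.
apiVersion: v1
kind: Pod
metadata:
  name: sriov-consumer
spec:
  containers:
  - name: app
    image: example/app:latest
    resources:
      limits:
        # Request 4 VFs mapped to network service "123"; the scheduler
        # places the pod on a node advertising that resource, and the
        # matching VFIO devices appear inside the container.
        networkservicemesh.io/sriov-123: 4
```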
So when the ConfigMap gets updated, the controller will detect it and react accordingly, removing or adding advertisements for VFs. That's pretty much it. — This is actually really important, because there's a segment of folks with use cases that Network Service Mesh is seeking to solve where being able to attach a pod that needs a network service to an SR-IOV VF that provides that service is a really, really important capability. So this is a really important thing to be able to do. I know that people are using SR-IOV in containers now; I don't know exactly how they're orchestrating it currently — I think through the CNI. But there is certainly a lot of interest in giving people a migration path to more portability with other data planes as well, perhaps all within the container and NSM context. I think it's great. — Yeah, and there's another problem to solve that's actually really critical. One of the problems people are trying to figure out right now with getting an SR-IOV VF into a container is that not all SR-IOV VFs are created equal: they have different characteristics, they connect to different things, et cetera. So there's a group of people right now trying to add the SR-IOV VF information directly into the pod spec, which is kind of ugly and messy — you literally name a particular VF on a particular host. This approach is nice because it meshes well with how current Kubernetes scheduling works, and it also allows you to address the VF as a network service, which is a logical entity. So, taking a very simplistic example, if you've got a radio network, you have a bunch of VFs across a bunch of nodes that are able to reach that radio network.
Now you can simply say: look, I need this resource — the radio network service — as an SR-IOV hardware resource, and get the pod scheduled to a node where one is available and get connected up with it. So I think this ends up being really, really good, because it's so much simpler than having to track the names of the VFs on all the nodes and trying to schedule that way. — Yeah, so in short, we're getting some good traction on bringing SR-IOV on board; pretty excited about that overall path. Let's see — we also have work being done to publish images on Docker Hub. Kyle's been focusing on that. Can you give an update? — Yeah, so after a bit of a false start this week, I've got a pretty good path forward, especially after talking to Sergey today. I think what I'm going to do is create an NSM Docker Hub ID, and then in the Travis CI control panel we can put the credentials for that. Then we should be able to push what we need from Travis directly that way. After talking with Sergey today — he said he's seen this done this way for a bunch of other projects as well. So, Frederick, it turns out that for the patch where I closed the pull request, I should be able to reopen it, remove the hashed credentials, and get this work in later this afternoon. — So where exactly do the credentials end up living, then? — They end up being stored in the Travis UI. — Okay. — And apparently Travis does not dump these in logs anywhere, from what I can tell, so it won't accidentally spit those out. — Yeah, my only concern is if it's passed into an environment variable or something similar, and it runs through a commit.
And that's my concern as well, and that's the thing I want to verify: to make sure there's a way to ensure that type of malicious behavior isn't allowed. I know Travis has mechanisms for doing that — effectively, making sure the variable is only set when the job is run for certain purposes, in certain areas, by certain users, in certain ways. I remember looking into it at one point and being kind of impressed with how well they handle it, but it's definitely something to check out and make sure we get right. — Yeah, and that's the scenario. I think if we set it so that it's only on merge to master, or merge to a specific branch, then we should be good. We just don't want arbitrary echoing of the password. — Exactly. Yep, totally agree. So I should be able to verify all of this, figure that out, and hopefully get something pushed out this afternoon. — Cool. Okay. Ed, you have something called a simple starter device plugin; I'll let you discuss. — Hang on, let me take a quick look at it. Simple starter device plugin — oh, yeah. So this is a work in progress. I was factoring out a piece we use in a bunch of places, and which we expect to use in more. I factored the device plugin piece of the work — the Kubernetes device plugin handling — out into a reusable plugin that could be used other places.
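Returning to the Travis setup for a moment: the merge-gated push discussed above might be expressed in `.travis.yml` roughly like this. The image name and the `DOCKER_USERNAME`/`DOCKER_PASSWORD` variable names are assumptions; the real values would live as hidden variables in the Travis CI settings UI, never in the repo:

```yaml
# Illustrative .travis.yml fragment — names are made up.
language: go
script:
  - make docker-build
after_success:
  # Push only on direct merges to master. Travis also withholds hidden
  # variables from fork pull-request builds, which adds protection
  # against the "echo the password from a PR" scenario discussed above.
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin && docker push networkservicemesh/example:latest; fi
```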
This device plugin library has the potential to be not only really good for us but really good for a lot of people, because right now, when people write a device plugin, they sort of hack it up by hand, and having looked at quite a few device plugins, their quality — in terms of how they handle the standard device plugin machinery — varies wildly. So hopefully this will make things relatively easy for people who need to write device plugins, and not just for us. — Are you muted, Frederick? — I am. Thank you very much. Great. So, thanks for the update on that; definitely looking forward to seeing the pull requests. Okay. We have a task here for adding sidecar containers in Network Service Mesh, and it's marked as blocked. Is there something you're still blocked on? — Yeah. This is due directly to the bug in minikube. Once we move to Packet, where we'd be running actual Kubernetes clusters, we should not have any problem there. As a workaround, I was thinking I can comment out the CI part of this pull request so the actual functionality can go in, and once either the minikube bug gets resolved or we move to Packet for the Kubernetes cluster, I can enable the CI. — Okay. Yeah, if you need any help or anything with that, let us know and we'll jump in and do what we can. Right — just for the overall epic: we're working on publishing images to Docker Hub, and once images are published, our DaemonSets are pretty much all set up so that we should be able to pull from them.
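On the reusable device plugin factoring Ed mentioned: one shape such a library might take is a small interface that each concrete plugin implements, with a shared framework owning the boilerplate. All names here are hypothetical — this is a sketch of the idea, not NSM's actual API, and the real framework would speak the kubelet's device plugin gRPC protocol rather than return values in-process:

```go
// Hypothetical sketch of a reusable device plugin framework.
package main

import "fmt"

// Device is a minimal stand-in for a schedulable device.
type Device struct {
	ID      string
	Healthy bool
}

// DevicePlugin is the small surface a plugin author would implement.
type DevicePlugin interface {
	ResourceName() string // e.g. "example.com/sriov-123"
	Discover() []Device   // enumerate devices present on this host
}

// Framework owns the plumbing shared by all plugins: registration,
// health filtering, advertisement to the kubelet, and so on.
type Framework struct{ plugin DevicePlugin }

// Advertise returns what the framework would report to the kubelet:
// the resource name and the currently healthy devices.
func (f *Framework) Advertise() (string, []Device) {
	var healthy []Device
	for _, d := range f.plugin.Discover() {
		if d.Healthy {
			healthy = append(healthy, d)
		}
	}
	return f.plugin.ResourceName(), healthy
}

// A toy plugin: two devices, one of them unhealthy.
type toyPlugin struct{}

func (toyPlugin) ResourceName() string { return "example.com/toy" }
func (toyPlugin) Discover() []Device {
	return []Device{{ID: "dev0", Healthy: true}, {ID: "dev1", Healthy: false}}
}

func main() {
	f := &Framework{plugin: toyPlugin{}}
	name, devs := f.Advertise()
	fmt.Println(name, len(devs)) // example.com/toy 1
}
```

The value of the factoring is exactly what was said in the meeting: the fiddly, easy-to-get-wrong standard machinery lives in one audited place, and a plugin author only writes the discovery logic.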
That should make the overall path to testing on the CNCF cluster at Packet viable at that point. So once we get there, the CNCF cluster at Packet should become available for testing these kinds of things. — So, shall I just keep it in this state, or comment out the CI portion — the testing — for now and let the PR go in? — My recommendation would be to get it in now. The reason is that if there's any refactoring, it's better to have this patch in and have it refactored with everything else, rather than having you go back and do a lot of rework because things have changed. So — the Network Service Mesh mascot. I think Ed had some comments on this. — I'm getting pretty good feedback on Ariadne the spider, which I think is good. At some point we'll have to get someone to redraw Ariadne, because while we do have a legal right to use the image — I got it properly from a stock photo site — we wouldn't be able to use it for a trademark. So we will eventually need to get it redrawn. If folks know good artists, that would be a useful thing to know. But people seem pretty happy with it. — Ed, sorry to interrupt. I do know a good artist, but I'm curious: if there are artists we want to ask to do it, are you planning to have any incentive for the artist? — Yeah, that's one of the things we need to figure out over time as we mature a little bit. It kind of relates to another item we have, where we're exploring becoming a Kubernetes working group or something like that, and once we have some kind of formal role in the CNCF, they will typically have some funding for that sort of thing.
But I think that's one of the things we need to figure out. I think that would also give us access to the Linux Foundation's creative resources if we'd like. But no, I'm all about not asking artists to do work for free; their work is of value. — Okay, thanks. — Okay. So we've already spoken on the SR-IOV side. Do we want to talk more about the Kubernetes working group membership process? — Yeah. Basically, we've gotten feedback from SIG Network about becoming a Kubernetes working group. Right now, the thing we're trying to do is put together the "hey, we'd like to be considered as a working group" email, and we're working out drafting that; then we'll have to see how that goes. The other thing is that Tim Hockin, who's been a great supporter of ours and has been out for the last few weeks, is back, so I expect to start turning the wheels on that shortly. — Okay, fantastic. So, going down the list: documentation infrastructure — no updates on that. For those who were not present in the previous meeting: basically we want to build our documentation with Hugo and generate some nice, well-laid-out documentation for our users over time. If anyone wants to help out with that, any help is highly appreciated. I also have a document that I'm writing up on how to attach a privileged container to an existing pod. Effectively, what it covers is this: when you run a pod, the pod runs without privileges, and privileged is effectively root on the system. So in order to add capabilities — for example, if you wanted to add an interface — you need at least a capability like NET_ADMIN in order to do so.
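For reference, granting just that one capability instead of full `privileged` mode is standard Kubernetes `securityContext` syntax. The manifest below is a generic illustration (pod and image names are made up), not NSM's actual helper manifest:

```yaml
# Minimal sketch: a helper container granted only CAP_NET_ADMIN,
# which is enough to create/modify interfaces and routes, without
# running the container fully privileged (effectively root).
apiVersion: v1
kind: Pod
metadata:
  name: netadmin-helper
spec:
  containers:
  - name: helper
    image: example/netadmin-helper:latest
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
```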
So I have a document that outlines how you spin up a new container that can bind to the pod's network namespace and have NET_ADMIN access, so these changes can be made — and made in a way where the users themselves don't necessarily have that access; it stays on the Network Service Mesh side, in order to protect the security of the cluster. — Frederick, I have a question. Based on the past discussions we had, the direction was that NSM runs a kind of privileged container and does all the plumbing on behalf of the client. In that model the client basically doesn't need any privilege and has no need to create any interfaces — NSM does that for the client based on the requested services. So is this a direction change, since you're talking now about the client doing some interface-related work? — Let me clarify. The client itself doesn't do the work; it would still be run and owned by the Network Service Mesh infrastructure itself, so that hasn't changed. In the long run, it would be best to reduce the total set of security privileges the daemon has access to, so that if for some reason the daemon were compromised, we can minimize the impact. One way to do that is to reduce the overall set of privileges. It may also end up simplifying certain tasks: when we want to add an interface, instead of having the Network Service Mesh daemon bind temporarily to that namespace, or run commands while tracking which namespace it's manipulating, it can spin up a container — not a pod, but a container — that has NET_ADMIN and is capable of running the command on behalf of Network Service Mesh.
But it would be fully owned and controlled by Network Service Mesh itself, so the pod and client would not have any privileged access at all. It's more about refining the current path a little bit, and if it makes sense to move in this direction, that'll be helpful. There's also another benefit: if you're developing a VNF or CNF for this and you want to experiment, it helps a lot on the experimentation side, because you can add routes manually, delete routes, add interfaces, and wire things up. So it helps with that aspect as well. — Okay, got it. Thank you. — All right. There was also an NSM enhancement proposal that was added. Do you want to say anything on that particular topic? — I think the proposal reads as being about becoming a CNCF project, which is certainly among the formal options available to us. At this time, the recommendation from SIG Network has been pretty strong that they would like to see us as a Kubernetes working group, or, as a fallback, a Kubernetes subproject under SIG Network — but with a strong preference for a Kubernetes working group. So my guess is that we will proceed as a Kubernetes working group unless something goes awry there. Do folks have other thoughts, opinions, or feelings on the subject? — Actually, digging into the GitHub issue, it's something a bit different. This was actually from the Volk group, talking about the writeup of how NSM could help with the CNCF CNF project. So my apologies. I don't see Taylor on the call, but I do see Watson. Watson, is this something you can discuss, or should we just punt it till the next meeting?
Probably want to wait till Taylor and Lucita are back on that. — Okay, we'll wait on that then. Thank you very much. There was also a new issue added about separating out the concerns for the different audiences of NSM to make it more accessible, by Dune Hammer. Are you on the call, John? We seem to be missing him. I think it's a good point, though. — Fred, do you want me to scroll down to the action items from last week? I think you're actually reviewing them, and it's a huge list. — There's a link in the agenda to a project board that he's actually reading from. — Oh, okay. — It'll actually be easier to share the board that we're walking down progressively. — Yeah, where is that? Lucita was going to organize that for us — oh, is that better? — Yeah, that would be better, but I've actually never seen it. — It's brand new. It's the first item under action item tracking, where there's a link to the project board. — Oh yeah, okay, sure. Anyway, we have about 20 minutes left on the call, so my recommendation is: let's talk about onboarding OSS and ONS newcomers, and work out how we can best help new people who are looking at the project to understand what Network Service Mesh is and how they can contribute in various ways. One suggestion Ed had was that we add a landing page we can direct people to. Do you want to discuss? — Yeah. One of the things about it is that I've been asked to present about 20 minutes on Network Service Mesh at the seminar, and one of the things I like to do with slides is put QR codes on them for links, so people can take out their camera phones and take pictures.
The QR code automatically gets them there. So if we have a landing page, it could be tailored for that audience. This sort of gets to the "separate out concerns for audiences of NSM" point: that audience is going to be very focused on NFV, which is an important use case for us, but it's not the only one. Basically, it gives us a place for them to land and proceed from, and we might think about what other things we could do to help capture the interest of the audience from that landing page. — I think something else we should add is that if we hear certain ideas or concerns come up during the talk, even if they're not answered in the talk itself, we can use that landing page as a way to engage people afterwards and answer questions or correct misperceptions. — Yeah, I think that's probably a good idea. I tend to give incredibly conversational talks, and my experience is that you learn more from the audience than the audience learns from you — about how they are understanding what you're saying and the kinds of misconceptions that arise — and that makes it easier to communicate in the future. Particularly, if we could get somebody there in the audience who'd be willing to do a sort of blow-by-blow live update of the page, that's also really amazingly compelling in terms of people feeling like the community is engaged with the audience. — So we'll have to add that as an action item: work out who's going to be there, and of those, who's willing to take on that role. Are we looking to do that now? I mean, I'm planning to be there on Tuesday for that session as well.
So that might actually work. I think just having a couple of folks in the audience who can quickly push, review, and merge PRs as we get the questions, and provide basic answers to them — like I said, that shows a really strong commitment to engaging with your audience, which I think will actually make us look extremely collaborative. — Yep, definitely agree. Another thing I think we should be prepared for — and we don't have to flesh out a full setup now — is that the entire open source CNF effort is talking about what a CNF is, and the vocal group on that isn't here at the moment. I think this would be a really great topic for them, and I'll make sure to bring it up with them later on. One of the things that really helped on the Kubernetes side for app developers is that they had a set of heuristics they call twelve-factor apps. If you follow the heuristics of a twelve-factor app, ideally you end up with something that can scale horizontally and fits very well within the Kubernetes model. One of the things I think we can do is something similar on the CNF side: we have twelve-factor apps, so maybe we have ten-factor CNFs, or something similar — basically a set of heuristics that help people build scalable CNFs on their clusters, CNFs that their clusters can control. So I think that particular conversation — starting to think about what it means to be a CNF and how we get that horizontal scalability — is worth having.
And just to drive the point: if you look at why we're here, one of the reasons is the scalability issues of trying to drive traffic through monolithic VNFs. How is that going to scale when we start to hit 5G, edge, and Internet-of-Things-based traffic? Having a system that is capable of horizontally scaling becomes very compelling, so we're going to have to have re-architecting or rewriting of VNFs into CNFs. This particular crowd is absolutely fantastic when it comes to understanding the network side and how things traverse through it, but a lot of the expertise in how you build horizontally scalable apps is more on the enterprise and app developer side. So we can take some of our knowledge from the app developer side and help avoid the mistakes they made and learned from while we're developing CNFs. In essence, what I'm proposing is that we start coming up with a set of heuristics to help drive what it really means to be a CNF, and that we try to draw in other groups and organizations who can help with that. Does that make sense? — Totally. Yeah, I think that's actually a really great idea, because I kind of feel like that type of guidance is going to be broadly useful. If we can have something like that to frame the discussion, I think it's going to be really useful in that Tuesday session. — Yeah. And this is going to be a little more difficult than the twelve-factor app approach, because with the twelve-factor app, how you horizontally scale a web app was already relatively well understood; it was just a matter of getting the message out for people to head in that direction.
We're going to have a more interesting time with this, because instead of targeting just one group of people, like app developers, we're targeting multiple groups from multiple industries, with intertwined interests and intertwined requirements — but ones that are certainly different enough that one set of heuristics may not work for everyone. So we have to think carefully about this type of guidance, but I think it's going to be absolutely necessary to help progress the industry forward at a faster pace. — Yeah, this is actually a really crucial point, because cloud native asked everybody to rethink. When you went from physical boxes to the cloud, it was a lift-and-shift mentality, right? You slapped a "v" in front of all the existing things and off you went. Cloud native asked everybody to actually think again, for the first time in decades, about how you actually write and deploy applications, because there was a new space of possibility that was not previously there. I think what we're going to find as we go to cloud native NFV — writing proper CNFs, with Network Service Mesh — is that it is going to require thinking about things differently than we did when they were just physical boxes, and even than when we just thought "slap a V in front of the network function and call it good." So good, crisp guidance on how to rethink things is going to be helpful. It's also going to be a much more interesting process than the twelve-factor app case, because the twelve-factor app was basically a codification of things that people had already figured out by experience, and we're going to have a lot of collaboration with early adopters, figuring a lot of this out by experience ourselves.
And so it will naturally be a bit more fluid. The twelve-factor app was mostly putting structure around what were already well-understood good practices; we're actually going to have to collaborate together as a community to figure out what those practices look like. Yeah, the interesting side point to that, Ed, is that when people think about microservices, and about network functions being part of a microservice mesh, they start to worry that microservices mean we're going to be forwarding packets back and forth through slow components that will slow down the actual network. That's not necessarily the case. One piece is the orchestration plane, which will allow us to take a network function and perhaps scale out its pieces. But the network function will have to be written in such a way that we don't accidentally funnel the actual data plane packets through an interface they shouldn't go through, which would in effect slow the functionality down. But I think those are challenges for the future. Yeah. One example I've been using for this exact point is IoT devices with radios in them, communicating with some central controller. If that communication is managed and controlled by a Network Service Mesh, one option we have is that if two IoT devices are located close to each other, we can direct both devices to go point to point with each other over the radio for a certain period of time, so they can communicate as quickly as possible. Then, once they're done or their lease expires, they come back to the Network Service Mesh, which continues to coordinate their network.
And when you start looking at some of the challenges involved with that, like how do you pick which frequency they should communicate on, what happens if you get interference, or if one of them moves away, from that perspective there are a lot of things we can help solve within that particular data plane. One of the nice things about it is that it drives the point that those IoT devices don't have to go through a container in a Kubernetes cluster; they can still be managed by a Network Service Mesh and still be part of a larger picture. So I think that's a fantastic point you've brought up: we have to drive the point that we can be part of the data plane in terms of the CNFs that join up, but Network Service Mesh itself is not part of the data plane. We don't sit in that path and slow everything down. Are there any other ideas people can think of in terms of onboarding OSS and ONS newcomers? Okay. Well, I'm going to add a couple of items onto the agenda so we can track them later on. We need to continue our improvements to documentation, making sure everything's up to date, and either cull the things that are not up to date or update them. If I could break in here and show my screen:
We had discussed that before, and I was just looking; I just had the board up, with the issues on the Network Service Mesh board. Why don't you write up issues for those two things specifically: improvements to documentation, and also refactoring the getting-started guide into a real, true quick start. Those are related but different issues, and I'll be happy to write up both of them so we can track them. I was supposed to do that, but I haven't yet. I'm sorry, I've talked too long. No worries, and thank you very much for helping with this. This type of work is going to be invaluable once we start getting these people looking at it, so any improvement we make here is going to help drive the community. Okay. Well, is there anything else anyone would like to bring up on this topic, or any other topics? Okay. I have one last request as well. If you intend to work with Network Service Mesh, and you're willing to give a presentation at KubeCon, the deadline, I believe, is August 12th. So get your abstracts in if that's something you're interested in. I know we've had some interest from some of the OpenDaylight people, who are talking about putting together a presentation. It would be fantastic to have others join in as well and talk about your use cases, or where you want to go. And with that, I don't have anything else on the agenda. Thank you everyone for your time, feel free to join us on the Network Service Mesh IRC channel, and have a great rest of your day. Thanks. Thanks, everyone. Bye. Thank you. Bye, everybody. Thanks.