Okay, so I've got the meeting notes up. Can someone share a screen with the meeting notes? One second — let me get to the meeting notes; they're here somewhere. There we go, they're even under "recent" because I added myself to the list. So let me go ahead and share that. Okay, let's go ahead and get started. First: is there anyone who needs to be added to the attendee list who isn't able to add themselves? We can add you manually or have someone else add you.

Yeah — Al Morton can't reach Google today.

I'm having some difficulty sharing the meeting notes, I'm not sure why yet. Could someone else?

I'll try. Okay — desktop one, I guess, if I just share my whole desktop. Okay, share screen, and you'll see all my ugly secrets. Can you see my desktop?

Yes. And the meeting notes should be right in front of—

Fantastic. Okay, and I'm looking at the agenda. Oh, oops — that's May 25th. I don't know how it got scrolled down there. So where are we? Oh, here we are. Okay, I've got it. The July agenda should be right in front of you, Fred, so read away.

Fantastic, thank you very much. So the first question is agenda bashing. I added some stuff to the agenda; if you'd like to have a conversation about something, please speak up. Okay, so let's go ahead and move on. The first item to review was Docker Hub images. As of last night, I don't think Ed has anything on Docker Hub — is that still correct?

That's correct. Do we have something we're ready to push? I'm happy to go sort that out — again, I'm happy to make that happen. Apologies, it's been a bit of a weird week for me.

No apologies needed. Well, what do you think, Sergey?
Do you think we're at a point now where it would make sense to start pushing some of the images up for testing, or should we wait?

No, I mean, we could. If folks want to play with what we have already, it would be great to have extra feedback, or reports on issues. If they don't want to build the images themselves, then having them on Docker Hub would be great. It's not mandatory, but it would be nice.

Yeah, I think it'd be a good idea to start building some of that out, even though the functionality isn't really there to run workloads yet. I think it'd be good to get this in so that it's been testing and baking for a while — especially with better continuous integration testing potentially coming up, I think it'll help tremendously when we get to that point.

Let's see — we have a Makefile that has been created. There are a couple of bugs in it that I found and am going to fix today, but basically you should be able to type make and have it work. I don't know if it works on macOS or Windows; it definitely works on Linux. So if anyone wants to test on Windows or macOS and make contributions, that would be very welcome.

So, mascots. Did anyone come up with any ideas for a mascot?

So I was taking a look at a potential option — tell me what you all think of this. I was thinking of perhaps some form of tunnel spider as a mascot. You know, it's constantly weaving the web, constantly building tunnels and connecting things together. And we can find a way to make it really cute. Does that sound like a good idea?

Well, Fred, everybody — or some people, anyway — read Charlotte's Web in their youth. Instead of saying "some pig," maybe its web could say "some network." That might be cute.
I think it's also good to get something cute that we could maybe do a stuffed version of at some point. I know from experience with other communities that when you get to the point where you're making swag, stuffed mascots make amazing swag.

Yeah. And one of the things I particularly like about the Linux and Go mascots is that they're alive — they can do things: they can fish, or play golf, or whatever. So something that's quite alive is definitely a positive.

Sorry — does it have to be an animal?

It doesn't have to be an animal.

Because as an alternative, you know, in the old days the telephone system was basically human-driven — there was an operator switching the connections. If you could get some sort of drawing of that, switching the connection, that could also be a possibility.

Yeah, definitely. And if you want to keep it in nineties space, you could go with something like Splinter from the Turtles — constantly traversing the tunnels and making them a safer place. Because, as we know, the internet is a series of tubes.

Well, I'll let Ed think these things over and see what he wants to do with it. Cool. Okay, so we can revisit this topic later — there's a lot of important stuff we want to get to. So, Tom: any luck with some of the documentation?

Yeah. I'm trying to do end-to-end here, and I was hoping to have a pull request ready by last night, but it's going to take another day or two. I initially tried to work with a CentOS distro — as you might guess — but I ran into issues just trying to lay down the packages, really basic stuff, like getting Google protocol buffers to work with the Go versions I could get from the mixture of upstream and packaged sources.
So I went back to Fedora 28, which, as documented here, is much better, and that's moving along much more quickly. When I get things updated for Fedora 28 and everything built and running, I'll document the process. It shouldn't be long now. Sorry for the delay.

Oh, no worries. We're happy to get any help from anyone.

The one issue I wanted to bring up that's hard to deal with is that there are a lot of .md files that each tell you bits of how to do things. I decided to write one from scratch, so I'm just doing a pull request with just this file, and hopefully that will start the discussion of whether we want to consolidate the setup, build, and run documentation. We can wait for the pull request to start that discussion — that might be better, since it looks like we have a lot on the agenda today.

Yeah. So just as a teaser: we moved the wiki into the main repo.

Okay. Yeah, that's right, so it should go in there.

And the plan, at this point, is to eventually generate a Hugo-based website, which will give it much better layout and a much better table of contents.

Right. So it seems to me that if this is a Markdown file — which it is — then that should be a step toward that.

Absolutely. Hugo takes Markdown and generates HTML, for those who aren't familiar with it. The general idea is that if we're going to have three or four .md files, we just have to organize them into hierarchies that agree with each other, make sense, and look right. But that's a discussion we'll have.

I see the note here — I don't know why I haven't looked at this yet; it must have just come in on a recent pull request.
Or in a recent pull. So let me look at it and see how my stuff fits in, since I haven't checked anything in yet or sent a pull request. I should be fine — I'll make sure it fits into what we've got here, and then we can discuss what to do further. Okay, I'm done.

Okay. So, Taylor, were you able to add any documentation for the CNCF — I think it was the CNF project?

Yeah. So after talking with Ed — I had started some initial work on this, and I think I was going at it from a different approach. I was looking at a general overview rather than focusing on working with the NSM project. So I'm redoing the write-up to focus on what the CNF project needs from NSM: where is that integration point? And talking with him, it may be better to open an issue or a pull request or something similar, like on the Kubernetes side where you have the Kubernetes Enhancement Proposals, the KEPs. So, essentially a write-up — which could be an issue or whatever — that says: here's the CNF project, here's how we're deploying these containers in Kubernetes, and here's where we can't use the Kubernetes-native ways, or can't build them containerized without stepping outside of what Kubernetes offers — which is the networking side, which is what we're needing. So I'm focusing more on that: keeping the overview I'd already been working on, but highlighting the parts where we're actually blocked — and there are specific areas where we're just fully blocked. So I'm pivoting to do that. My question for y'all would be: do you have thoughts on where something like that should go — an issue, or it could still be a doc like we were talking about, or the wiki, either way?
But it's more of a request for comments on what we're doing, and then helping to drive any features y'all might be doing — to get feedback and help steer y'all in that direction. So — do you have anything to add, since you had a conversation on this?

Yeah. I think doing it as an issue is probably a good idea, because obviously being able to meet those needs is certainly among our objectives, and being able to get you something you can use sooner rather than later — at least for me personally — is something of a priority. So getting an issue where we can dump them out, and then maybe from there break them down into smaller, more actionable issues, strikes me as an excellent idea. I see you're all over it.

I'm glad you said that. I have this vision of you sitting there with the cursor on the submit button. Yeah — my guess is it'll probably get broken up into smaller issues as we go, but just getting the information out there makes it trackable.

Absolutely. We may end up just closing it out — it may be more of a conversation issue to work out how we want this to look, and it could lead to an enhancement-proposal-type thing for y'all, or whatever. So, great — I'm working on a draft in a Google Doc and will get that into a GitHub issue.

Okay, fantastic. I think that's a really good next step.

Hi — actually, I added the document to the wiki, but it slipped my mind that we had to move it into the docs directory. I promise I'll send it today so that folks can review it properly. For that, I also created a pull request implementing all that functionality, and I saw last night that I need to revisit it. I'll do that today as well, so that folks can provide feedback on it.

In fairness, moving the wiki to the repo is very, very recent.
So, you know, it's not like you weren't paying attention — it happened really recently.

Okay. Yeah, so I'll do it now while I'm in the flow. And I created a PR regarding the sidecar admission webhook, so feel free to provide comments. I added some very basic scripts just to automate the whole process, so that people don't have to type a lot of commands. What those scripts are doing is creating some SSL/TLS certificates and storing them as secrets in Kubernetes, using the Kubernetes APIs. All of this could be automated using an init container: when somebody deploys, they just apply a YAML file, an init container comes up, and it does all this work for us. That would be my next step, once I get some feedback about this approach, because right now it's a little bit manual. I'd like it to be more automated, so that nobody has to run those scripts to create the certificates and everything — it just gets done in the background.

Cool. Definitely looking forward to the pull request on that.

Yeah, I will fix the pull request I already have, and I've created a new pull request for automating this.

And to everyone who's worked on things: thank you very much. Even if it's not checked in at the moment, we're very grateful for the work that's been put in.

Just backtracking a little bit to one of the announcements: remember that there is a cloud-native network function seminar at the Open Source Summit, on Tuesday, August 28th, in the afternoon. For those of you who are going to be in Vancouver during that time, make sure to sign up.

Okay. So, a couple of items: review of development activity.
So we had a few. Starting with issues closed, the ones most prominent to me: ShellCheck has been added to our builds, so whenever you add any shell scripts, it'll automatically be part of the build and help find issues with portability or otherwise. And the integration tests are now runnable from both the console and Travis, so you should be able to run the integration tests and see them work.

From the pull requests: we now have a pluggable architecture, and Ed has done a tremendous amount of work to get us to this point. Basically, we want the project set up so that we can add and remove plugins based on the needs of whatever is being built, and this is an important step toward that. Do you want to add something?

Yeah, just to give one concrete example of why this is important. We're writing things like the network service manager with a plugin architecture, and that makes it trivial for people to plug in or replace components for various purposes. The biggest example is the data plane: if you have a different data plane you'd like to use with NSM, it really should be as simple as writing your data-plane plugin, then taking the NSM plugin and plugging yours in to replace the data plane with the one you want to use — and away you go. There are other benefits as well: even the configuration management is done as a plugin, so if you want to swap out the configuration management for one that integrates more closely with your operational environment, this makes that possible.

So, definitely — it also improves testing, because you can mock out pretty much any major component.
So we now have a network service endpoint example. Sergey, do you want to say a word on that?

Yeah, sure. So, to be able to run end-to-end testing, we basically need three components: the NSM client, the NSM itself acting as a server, and the NSE, which offers its channel — or whatever service it offers — to the NSM. We had the NSM client and the NSM, and I just pushed the last bits for the NSE. The way it works now: initially, when the NSE comes up, it — same as the NSM client — calls the NSM and basically gets the socket for further communication. Then the NSE starts a gRPC server, which waits for connection requests. When there is a client — the NSM client — that needs to connect a specific network service to a specific channel, the NSM client calls the NSM, and then the NSM, on behalf of the client, calls the NSE to get the information required for programming the data plane. This chain is now working. The next step is to add the data-plane programming for these two, the NSM client and the NSE. Yeah, that's it — happy to take questions.

Let's see — we also have Cobra and Viper, which we're adding into the system. These are Go frameworks. Cobra is designed to integrate with the command line — basically to build out a command-line set of parameters — and a very large number of major projects use it: Docker does, and I believe Kubernetes does as well. We're also bringing in Viper, its sister project, which is designed to manage configuration. Viper lets you store your configuration in a variety of ways: in files such as JSON or YAML, or handed off to a configuration management server like etcd.
So that should help us with the overall configuration and execution of the system.

It's also probably worth noting that Viper is what backs the config plugin, and the log plugin is backed by Logrus. We're currently doing what I'm calling "logging by label": when you set up a logger plugin, you say, okay, these are the sets of labels I want all of my log messages to carry, and they get added — because Logrus logs in JSON. One of the nice things about that is that when you then go to configure log levels, you configure them by a selector on those labels, so you have a great deal of flexibility for tuning log levels up and down in a fairly local way.

Yeah, that's why I added Logrus to the list of things brought in as well. So, we've already spoken about the Makefile and the documentation move, and we've had more code cleanup. And these were the PRs merged over the last two weeks rather than the last week, since we did not meet last week.

Okay, so on to the main agenda items. We are working toward becoming a Kubernetes working group. Ed, I'll let you speak on this.

Yeah. So at the SIG Network meeting — not the last one, but the one before — we went to SIG Network and said: look, it appears there are two mechanisms for us here. One would be to become a SIG Network subproject; the other would be to become a Kubernetes working group. The opinion that came back from SIG Network was, "we would really like to sponsor you as a working group rather than as a subproject," which I think is a perfectly good answer. So I opened the PR for that, and we're going through the normal dance there, where the folks from SIG Network who are sponsoring us need to speak up. There's a little bit of confusion right now — somebody closed the PR because they thought it should belong in SIG Network proper.
That's going to be a discussion between people like Tim Hockin and the bigger-picture Kubernetes folks. He is about to go on three weeks of vacation, so this discussion will likely have a bit of a pause and then continue. But it's all the normal process for working groups coming into existence.

Ed, could you describe those options a little better? I think I understand it, but it sounds pretty critical what sort of base assumptions are made about your project when you join as a working group.

Yeah. Being a SIG Network subproject would basically mean we would do Network Service Mesh in the Kubernetes repo, as a subproject of SIG Network. And SIG Network has said they like two things about Network Service Mesh very, very much. The first is that we're technically orthogonal to standard Kubernetes networking — they think that is awesome. The second is that they also recognize the possibility that we could be used to solve problems like multi-tenancy for Kubernetes networking. So at this time they don't really want us to strictly live under SIG Network in the Kubernetes repo; they like the fact that we're working independently and orthogonally — it preserves the options best for them. And for that sort of thing, a working group is a much better structure.

I think this is actually probably good for us. I think we can move much more quickly in an independent repo right now than we would as a subproject of SIG Network, and we're in more of a young growth state, such that this is really the best fit for the project. But the point I've made repeatedly to SIG Network — and I think we generally agree about this here — is that we're perfectly happy to go with whatever disposition SIG Network thinks is best over time, and they will figure that out.
So I think right now our key focus is delivering something that's valuable, and we can decide what the future disposition of the code looks like as we get there.

That sounds good. I think at some point, though — and maybe it's clear we're not ready for it yet — we'll have to see where we fit in with the people solving, or attempting to solve, the multi-tenancy problem, and whether and how we can work with them.

We've had some of those discussions, and it turns out that Network Service Mesh, if you were to choose to apply it to the multi-tenancy problem, is an insanely clean solution. It avoids a whole lot of problems you might otherwise experience trying to solve multi-tenancy, and that's one of the reasons they expressed for liking us as a possibility there.

Has this been proposed to the multi-tenancy working group? Has it been discussed within that group that Network Service Mesh could be an option?

No — and thank you, we should probably go talk to them about it. Let me try to get on their agenda, put some slides together, and talk about that. Another thing I want to be mindful of — and this is something where I'd probably want Tim's guidance, so I may not do it terribly immediately — is that I want to make sure we present it in a way that is comfortable for SIG Network. One of the reasons SIG Network likes us is that we're not trying to rewrite the whole world on them, and I'd like to continue to hold the "we're not here to change everything" stance, because they have been made uncomfortable by other folks who want to change a ton of stuff.

Makes sense. Yeah — it's what I'd call the healthy politics of keeping humans comfortable. And, quite frankly, I think before they're going to have any strong opinions, they're going to want to see us deliver something.
Yeah. I think the short-term plan is pretty straightforward, and it's pretty much the same regardless of where we end up as a project organizationally: get the product out there, get it in people's hands, find use cases, and make sure we map to them.

Yeah, so this is really about the long term: where do we sit, how do we interact with other groups, and what credibility do we have — besides having something working — and how do we interoperate with other groups. So I think this is not as important for the short term, but incredibly important for the long term.

So, action item: talk to the multi-tenancy working group. And it doesn't need to be this particular week — even just working out when the right time to talk with them is, and keeping that on the back burner, would be enough. So I've added that on. Are there any other questions on this topic, or should we move on?

Okay, so the next thing is continuous integration. I added this agenda item because I had a little more to add — hold on, I'm going to add a link real quick. Basically, one of the things we realized is that eventually we're going to be limited by Travis in terms of the type of testing we can do. So, in order to support multi-node testing, I reached out to the CNCF cluster organization, and I also talked with some of the people who run the cluster at Packet, and it looks like we're getting strong support from both CNCF and the people who run the cluster at Packet.
So we're waiting for final responses on what's going to happen with gaining access to the cluster, but long story short: CNCF has a testing cluster for various types of testing and continuous integration, and what we want to do is have Travis farm out multi-node continuous integration tests onto those systems — or, if that doesn't work out because of limitations in the CI system or time, then at the very minimum have some form of daily build that verifies we haven't broken the world in a fundamental way. And in the long run, it appears we'll also get access to some interesting hardware: we should gain access to DPDK-enabled network cards, and there's also the possibility of access to hardware capable of speaking more esoteric protocols. That's not a guarantee at this particular point, and at the very minimum we're starting small. So that's where we're at with that.

One of the other things I'm going to do: I have a Kubernetes installer for Ansible that works on both CentOS and Ubuntu systems. I wrote it for the Linux Foundation's IT group because we needed it, and it appears the CNCF CI group doesn't have something equivalent, so I'm going to pitch it to that group — I know a couple of people from that group are here, so this is a preview of something we're going to do. Basically, we could potentially use this to install nodes onto that cluster if there's no current way to do that with a CNCF CI tool that already exists. And if one doesn't exist, we'll work out a way to integrate, and potentially donate some of this to the CNCF CI group. What was that? Oh, go ahead.

I just had a comment — I'll wait a minute.
So you're done with that? Cool.

At the very minimum, to get the cluster running, we're going to need the Docker images, and we'll have to make sure we have DaemonSets ready to go and running. I believe we have a DaemonSet already set up, but we'll have to review it and make sure it's set up properly, then work out how to spin those up and run them from Travis. The last step after that is just to continue building out more integration tests — and any time we have some form of demo that we run, we also want to make sure it ends up in the integration tests, so that our demo paths are very well tested and hardened.

Quick question on that: does cross-cloud CI have stuff for installing Kubernetes on Packet already that we might be able to leverage?

I believe they do, and that's part—

Hi, this is Taylor. The cross-cloud provisioner portion of the cross-cloud CI project does provisioning on Packet — it can provision Kubernetes clusters on Packet. The issue would be the use case: does it fit what NSM is needing? I definitely like the idea of the Ansible roles being available to the Linux Foundation IT group as a whole; I think that's going to be really useful. And as far as what's most useful for NSM: if this works out, that's great, and we can definitely take a look at the cross-cloud stuff if desired. We talked about that before, and I didn't know where things were as far as the automated testing.

Yeah, that's my understanding — cross-cloud has something to install on Packet. The Ansible role was originally designed for running on a set of VMs or bare-metal hardware that already exists, so a cloud provider was not present in that scenario.
But yeah, if it turns out we can just use the cross-cloud CI Kubernetes provisioner to spin these things up and kick them off — or we can make a commit to help in that path — I think that would be really helpful.

So there's also the Kubespray project, which already uses Ansible playbooks to deploy Kubernetes. Could it be augmented?

Yeah, I tried Kubespray earlier and had some trouble with it — I forget what the trouble was. But yeah, that's another thing we can look at and see if it works in this environment.

Also look at it this way: there's a really large number of tools out there, so it's hard to say "this is the one to use" without really looking at the use case and deciding. Cross-cloud CI has a lot of components — if you look at the dashboard at cncf.ci, there are a lot of different parts, and that doesn't mean all of them are necessary. The part that's probably most relevant for this is just the cross-cloud, multi-cloud provisioner portion — I'll drop a link. It supports all the clouds currently active on the production dashboard: AWS, Azure, GCE, IBM Cloud (actually their container service, so it does support using different container services), and then Packet as an example of bare metal — it allocates the hardware resources from Packet using the API and then provisions Kubernetes on them — and then OpenStack; we just added vSphere recently. It could target other things too, VMs and/or your own hardware, because we've done things like PXE boot. But again, there are a lot of choices — Digital Rebar and similar tools do bare metal too. So I think what would be good is to look at how the testing should happen — as we were saying earlier, running it daily and looking at the builds — but also looking at what we want as far as output.

Yeah, I think that's good advice.
What I'm hearing is that the next step is to really nail down the use cases before we can work out which tool to bring in.

Yeah, that sounds like a good idea.

And the last major item on the agenda is what we're doing with the documentation. So last time we met, we ran into a couple of issues with how you review things and how you tell what's changed within the documentation: GitHub, despite the fact that it stores the wiki in a Git repo, does not extend the GitHub repo tools to the wiki. So we moved over to the main repo — basically, everything has been moved from the wiki into the docs directory. If you'd like to add anything to the documentation, please add it there. The current plan at this point is: number one, build out more documentation and remove stale documentation — cleanup. The second step is to set up Hugo to generate a site. Once we have Hugo generating a site, we're looking at a tool called Netlify, which is capable of triggering on a commit, rebuilding the site, setting it up with hosting, and then pointing networkservicemesh.io toward it. So ideally, we'll be able to add additional documentation to the site through this mechanism and have everything show up nicely laid out on the main page. The bottom line is: if all the docs are in the docs directory, we can review them, modify them, and update them, then generate a site based on them and basically have the ability to update the site.

Yeah, that's great. Sounds great.

Exactly. And once a commit is accepted and merged in, that'll trigger and automatically update the webpage. So it's a little bit more work than the wiki.
But I think the end result is much nicer. It also promotes code review on our documentation, and we can treat documents as code as well. There's one last thing we need to add that I forgot to put on the agenda: eventually we're also going to have to generate Godoc and push that into the docs directory. So we'll have to work out a way to add that into the build system, to make sure Godoc is generated when you run the build, and make sure we add in the appropriate checks. We'll work out whether we can get Netlify to run Godoc directly, which would be ideal; but if it can't or won't, then we'll see about setting it up within the build system itself. Yeah, I think it can, but I haven't actually verified that yet. So yeah, we can definitely figure that out. And a couple of comments that Holger added that I missed from earlier: Holger, we'll take a look at Spinnaker as well, so thanks for the suggestion. Anyway, that's it for the main agenda. We have a few minutes left; are there any other topics that people would like to bring up, or any questions? Okay, yeah, very quickly. I am chair of the CentOS NFV SIG, so I always have a standing agenda item of asking people what packages we need for NFV in the CentOS distro; just keep that in mind. If anybody wants to sponsor this, or some combination of the packages we're talking about, as being in the NFV SIG and available in the CentOS distro, we have a process for that. So please contact me, or look on the CentOS dev mailing list, and look at centos.org; you'll see pointers there to the NFV SIG and other useful resources. Excellent. I appreciate you raising that. Yeah.
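If Netlify can't run Godoc directly, the fallback build step mentioned above might look something like this. The output layout and the idea of dumping plain-text `go doc` output into the docs tree are assumptions for the sketch; in practice this would live behind a `docs` target in the Makefile.

```shell
# Hypothetical build step: render plain-text package docs into
# docs/godoc/ so the generated site can pick them up.
mkdir -p docs/godoc
for pkg in $(go list ./... 2>/dev/null); do
    # One text file per package; basename keeps the filenames flat.
    go doc "$pkg" > "docs/godoc/$(basename "$pkg").txt"
done
```

Run from the repo root after a build, this keeps the API docs versioned alongside the hand-written ones, so they go through the same review-and-publish flow.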
Even if Network Service Mesh itself doesn't end up in it, we should start thinking about our dependencies and working out what should land in there. I'll give a simple example: if VPP is a dependency, having a VPP package would be useful. That's already there; VPP is already there. But we don't... Yeah. So, are there any other packages people can think of that we need? Yeah, absolutely, that's a very good point. Anything you can think of or want to sponsor, or anybody who wants to sponsor a package: it is not a zero amount of work pushing a package through the system, building it, and all the mechanics of it, so it would be quite helpful to have people from this community involved in the NFV SIG for CentOS. Great. Thank you for the invite. I'll definitely jump in and take a look at what you already have as well. Okay, are there any other topics or announcements? Thank you everyone for attending. Our next meeting is scheduled for next Friday, so start thinking of agenda items throughout the week, and feel free to start adding them early on. I like to try to get agenda items in a bit earlier if we can, but we'll always have the agenda bashing at the start as well, so even if you forget or something comes up at the last minute, don't sweat it; we can still get it in. But if you add the agenda items beforehand, it helps a bit more with planning and working out the scope of what we can handle in the meeting. So again, thank you everyone, and we'll see you all next week. Hi. There was a question Daniel raised in the chat: is anybody here going to be at the IETF? Where is that going to be held? Montreal. And I'll be there. Yeah.
You know, do they ever hold any meetings at their headquarters, which is based in Fremont? No. I've been attending since 1998 and I've never been. That's what I've asked. Must be on Martin. Yeah. Perfect. Thanks.