Good morning all. We usually allow about five minutes for everyone to turn up. I do want to go ahead and remind everyone that our meetings are recorded from the time that they start, so you're currently being recorded. This will be posted to YouTube. If everyone could also please go to the meeting minutes and add themselves to the attendee list, that's actually super, super helpful to us. And we'll get going here in a few minutes. Thank you. Also please add any topics that you want to discuss. I see that Michael is here; he has something to share with us, so please add your topics at the bottom of the agenda. That is absolutely fantastic. And please do add your stuff to the agenda. We actually run fairly free-wheeling meetings; I think some of our very best meetings have started with an empty agenda and become amazing. So awesome.

So yes, it turns out agenda-less doesn't mean there's no agenda; it just means we instantiate one on the fly. My favorite definition of serverless is that traditionally you would provision resources and then accept requests, and in serverless you accept requests and then provision resources. So agenda-less does not mean that you don't have items in your meeting; it means you start your meeting and then you figure out where you're going. Our meeting minutes are world-writable for a reason, primarily so that everybody can contribute. Also feel free to take notes there as well and help us record what happens in the meeting. Can someone share the agenda, please? Thank you. So we'll give it a couple more minutes and then we will get started.

So, are we good to go? Yeah, let's do it. So welcome to the next network service mesh meeting. We have this call, which occurs every week. We also have a user-friendly call which occurs every other week at, I think it was 10 a.m. Pacific. Sorry, no, it's 10 a.m. CET and definitely not Pacific. The next one will be on January 21st.
We also participate in the CNCF Telecom User Group, which occurs every first Monday at 8 a.m. Pacific and every third Monday at 3 a.m. Pacific. And we also participate in the CNCF networking working group; the next call will be on Thursday, January 16th, and that meets every two weeks at 11 a.m.

We have a few events coming up. The next major one is KubeCon + CloudNativeCon Europe, which is going to be at the RAI in Amsterdam. The CFPs have been closed for a while; we're just waiting for the schedule to be announced. The schedule should be announced the day after our next meeting on January 22nd, which should be a Wednesday. We have a set of CFPs which have been added to a spreadsheet. So if you have submitted something that is related to or of interest to telecom networking, or to CNFs, please add it to the spreadsheet.

We are also going to have an NSMCon at KubeCon EU, which will be in a larger room. We now have an event page. We have opened, or rather we are going to open, a CFP which will close on February 21st, and I don't think I've created a form for that yet, have I? So I'll get started. There is a form; I created a form. In fairness, I copied your previous form and edited it for the new information, so "created" is maybe a little much, but I caused there to be a form in existence. Do we have, have we opened the CFP for that? Because there's no CFP open date. There is: if you look in the events overview there's a CFP link, and that is open. Cool, so feel free to start submitting things now. It is open until February 21st, which gives you time to work out what you want to do if your talk, if you submitted one to KubeCon, does not get accepted. There is an opportunity to submit it to NSMCon as well and still get it heard by the smaller but more highly targeted community. We also have sponsorship opportunities available. We were very pleased there was a good response from the sponsors last time.
So if you work for a company that would like to sponsor NSMCon, we've got a variety of different levels and ways in which sponsorship can be done. Have we loaded the prospectus up yet? Yep, it's behind the "become a sponsor" button. Perfect, so the prospectus is behind "become a sponsor". So please, please consider sponsoring. It's relatively low in price for the value that you get if you're heavily interested in this space. Cool. My Google Docs never loaded, so can you switch back to the agenda? Yep. Cool, thank you. Yeah, I'm getting the endless loading screen on Google Docs.

Great. We also have the Open Networking and Edge Summit, whose CFPs I believe are now open and close on February 3rd, so please feel free to submit to ONES in Los Angeles, and that will be on April 20th through 21st.

We have a couple of new announcements. The first major announcement is that we are creating an NSM PR and issue review meeting, which is going to occur right before this particular meeting, starting next week. We'll get a calendar event up pretty soon for this. We kept this short on purpose because we don't want it to, I don't know if evolve or devolve is the right word, but we don't want it to convert into an architecture meeting. This is purely for: let's go over the issues, let's make sure things are tracking, let's make sure we unblock people who are currently blocked. So that'll occur at 7:30 every week, right before this meeting. Yeah, so it's an excellent meeting to sort of see exactly what's going on on the ground, particularly if you want to get involved. We wanted to make sure that was happening publicly for folks, and my guess is that if we hit an architecture question in that discussion, we'll simply punt it up to this community meeting to resolve.
And if you're looking for something easy to get involved with, this is the place to ask: how do I get involved? What's the right first issue for me? And we can get that conversation started there.

Finally, we also received the KubeCon North America 2019 report, and we noticed that NSM has had very strong interest, higher than any other, sorry, higher than any other sandbox project. And so I think, are we higher than all the other incubating projects as well? No, we're kind of middle of the pack compared to the other incubating projects, but the fact that we're in the same general range as some projects that are extremely well established is kind of telling. So the OpenTracing guys came in at 1800, we came in at 1500, which is certainly not bad, right? We came in at 1500, CNI came in at 1700. So we're right up there with a whole bunch of those things. Obviously we're not in the same range as a Kubernetes or a Prometheus at 9k or 5k, but given the age of the project and where we stand, we're kicking ass. And the topic as well; I mean, people usually think of networking as this boring thing, but once it's up you leave it alone. What was the unofficial slogan: network service mesh, making networking sexy again? Exactly. Usually networking is the thing you don't care about until it breaks.

Do we want to bump the SR-IOV item down to the non-turning-the-wheels part of the agenda? Yeah, that got mixed in somewhere; I was just trying to get it on here. Better to capture than not to capture, dude. So thank you for getting it there at all. It's easy to move things around on the agenda; it's hard to magically know that there are agenda items we've missed. Yep, absolutely. So I'll go ahead and stick it down in the substantive part of the agenda and we can figure out where we want to put it order-wise. Cool.
Okay, we can scroll up very slightly and then I will do my best to cover Lucina's stuff. So, social media community team, Twitter stats: we have seven additional followers and we are following an additional 37. We have added eight tweets for a total of 924, so we will be approaching a thousand pretty soon. And we have posted four main things: topics of interest for CNCF SIG Network, the 2020 telecom predictions, the 2020 events lineup, and the CFP going live for ONES.

So, a little bit of background on the CNCF SIG Network. The CNCF is reorganizing the way the TOC works a little for sandbox projects. When a prospective sandbox project wants to join, usually they will give a talk at the TOC, and the line for this has gotten absurdly long. And so in order to speed things up, and in order to get better, I guess, visibility and tracking into what's going on, they're going to start asking the SIGs to do some of the vetting and some of the recommendations toward this. And so the CNCF networking SIG has been revived, and what's going to happen is that things related to service mesh, networking, proxying, etc. are all going to start their path in those SIGs. One of the things that will be good is to make sure that we get some level of representation there, and also some diversity in that space as well. So I'm definitely going to participate in that particular path. I'm also going to seek people from other communities and some of the service mesh communities and so on, to see if I can get them to join so we can help with that particular process. But part of that is also going to be, they also want to host talks on a regular basis that are on interesting topics. So the involvement doesn't have to be, hey, you show up every two weeks.
The involvement could also just be coming and giving an interesting talk. I have a brief point of confusion on the item we had about that in our meeting minutes: it said it meets every Tuesday at 11 a.m., and it says next call Thursday, January 16th. Ah, good catch. Let's double check the time schedule, because it turns out there is an older SIG networking group that was started about a year ago, and we're still chasing that down. Yeah. So I think Thursday is the right one. I believe, now that I'm thinking about it in my memory rather than relying on the words, I believe it's the same time slot as the Kubernetes networking SIG, but on the days when they don't run the Kubernetes networking SIG. Okay. Yeah. So there's a different page; I'll fix the links that we have in the minutes, because the link that we have is to the old CNCF networking working group. We probably want to change that to the SIG. Yeah, good catch. So let's double check the time. But yeah, it's definitely Thursday; it's early afternoon Pacific time, and we'll double check the exact time for that. Cool. Okay, I won't go on too long about that, but that's something of interest that is changing in the CNCF itself.

And with that, finally on the social media community team, we're going to have the prospectus and sponsor lineup and so on sent out. And once we get the go-ahead on the contributor podcast, and Ed, we may be the ones blocking it because I think he's just waiting for approval from us if we haven't done it yet, that will go out soon. Okay. Oh, definitely we don't want to be blocking that. Yeah, the only reason I can think of to ask them to hold off would be if we wanted to go over it and give a few comments. But the only thing I need to check is that they properly CC my PR minders, who get very upset. They literally don't care at all as long as they know. But if they don't know, they get very upset. So yeah, no surprises. No surprises. They don't do very well with presents.
So we do. With that, let's jump into the main agenda. We have breaking cloudtest into a separate repo. Ed, do you want to drive this part of the agenda? I think it's more Andre. Andre, do you want to talk about it? Sorry, I was muted. About the cloudtest: yeah, we have a separate repo for cloudtest, and we already have CI passing for it. It has just initial documentation; it will be updated in the next few days. And we have a pull request to switch network service mesh to use cloudtest from this separate repository. I think I've merged that now. Oh, okay, nice. So we've switched it to use the separate cloudtest repo. Next steps will be, yeah, to add more examples, so this tool could be more widely used. Cool.

Yeah, Ed. Awesome. My recommendation is, once this is more user-friendly, that we spend a little bit of time on this specific topic so that other people know not only what it is, but how to properly use it. Yeah, those are my next steps: making it user-friendly, preparing a presentation, and documenting the initial steps for anyone.

Yeah, for those of you who are sort of newer to the community: when we were trying to do testing in our CI across all the many clouds, we discovered that's hard. And it was annoyingly hard. And so cloudtest was built to sort of make it easier, because it turns out you get all kinds of weird things, like the varying time it takes for clusters to start, and scheduling tests across clusters, and a whole bunch of other things. So that got built in the network service mesh repo, and then a bunch of people looked at it and said, that's really neat, could we use that? And that's part of why we're breaking it out. Yeah. When we say that we run six clusters per cloud for every single PR test, this is the driving thing.

So, okay, we have some information on a rough cut of the forwarder cross-connect transition, the path concept, and simplified healing.
So this looks like it's your handiwork, so you have the floor. We talked through all this stuff before the break, which is: we realized we could simplify things a great deal if the forwarders stopped having their own API and just became essentially a particular kind of network service endpoint. It actually simplified a ton of stuff, not just the API; it made a whole bunch of interesting things possible. And there's a lot to be said for discovering that if you make your system simpler and more maintainable, it's also more powerful. And so there was a general fondness for that notion. And when we looked at that, we realized that we'd have to rethink how we were doing our healing stuff. And so we looked at how to revamp the healing stuff with the path concept, and it turned out that that made healing phenomenally simpler and more comprehensible, as in the current rough-cut code for healing is less than 150 lines of code. And it's generally reusable. So this was also kind of an awesome realization.

We talked about this a couple of times before the break, and people were generally friendly. So over the break I hacked a bit on this, and the result of that hacking is in PR 2050. There's entirely too much there; it needs to be pulled into smaller pieces, but you have to get to a certain point before you figure out what all you're actually doing. So I pulled that out and put it up so that everyone could take a look at what's there and the kind of style it's going with. Among other things, it'll make the existing code enormously more unit-testable, because all the individual pieces will be unit-testable, because essentially you end up building servers out of chains of things. And yes, I know about the conflicts; they'll be fixed. And in looking at that, and sort of talking about this with some folks, one of the things that occurred to us is one way to make this more accessible in general to folks.
We've been talking for a long time about pulling pieces out of the network service mesh repo into their own repos, because while the network service mesh repo, from a lines-of-code point of view, is still relatively small, from a thematic point of view it's gotten a little bit bloated. And so the suggestion here was to essentially start with the new code that we're doing here and start an SDK repo and an SDK-VPP-agent repo. The reason those are two separate repos is that VPP agent is a particular forwarder platform that you can use with network service mesh. And while I happen to personally think it's an awesome choice, you don't want to make your general SDK stuff pull in dependencies from one particular platform among many, because we already have things like the kernel forwarder, which is a different forwarder implementation, and we've got people looking at others. So you want to keep the platform-specific stuff platform-specific. And so the general proposal is to start breaking the new code out into separate repos, where you can go in bit by bit and a little bit faster, and start with the SDK and SDK-VPP-agent repos. Do people have thoughts or questions? We wanted to talk through this a little bit before we pull the trigger and start moving code in that direction.

Yeah, we already shared thoughts with you, but maybe it's worth discussing this also here on the work group call. So yes, I do agree that maybe there are bits that you wouldn't want to pull in just for the sake of using our SDK. I mean, we can see here that VPP agent is one; maybe some other things are coming along with other stuff. Still, at least for me, currently it's a more or less, I don't know, integral part. So do you think there is a way that we can still keep the SDK in a single repo, but then have all these other subfolders as separate packages, so that you... I mean, we absolutely could.
One option would be, we could in fact do multi-module repos. But I think we've discovered that while multi-module repos are awesome, and they certainly beat the hell out of having a single-module set of dependencies for something as big as the network service mesh repo has gotten, they do have challenges. So for example, there's a whole bunch of replace incantations right now that, if you want to use our SDK, you have to use to get it to work correctly, because it's living in a multi-module repo. So it's all trade-offs, quite honestly. And my general sentiment is: however we do it, I think we can all agree that we would like the platform-neutral pieces to be in a different Go module from the platform-specific pieces when it comes to SDKs. Does that make sense to everybody? Is that something we all agree on? I didn't mean to presume. I think that's a good approach.

Yeah, and that's related to, but not strictly the same question as, whether we want the platform-neutral and the platform-specific pieces in different repos. Then of course we should be aware of the other challenges, because everything has ups and downs. So multiple repos are a bit challenging on the side of having multiple PRs merged at the same time; otherwise you end up with broken software in general. Right. The place where this goes sideways is if the repos have bilateral cross-dependencies between each other. The nice thing about Go is that you can pin to particular points on another repo when you're specifying your dependency. If, for example, SDK-VPP-agent depends on SDK, but SDK never depends on SDK-VPP-agent, which I think is the desirable behavior, then if you make changes in SDK and changes in SDK-VPP-agent, the changes go into SDK first, then they go into SDK-VPP-agent, and then the things that depend on both of them can update the points they pin on the tree together. And it should be relatively smooth, as such things go.
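The one-way dependency direction described above shows up directly in the go.mod files: the platform-specific module pins the platform-neutral one at a specific version, and nothing points back. This is a hypothetical sketch; the version string and exact module paths are illustrative, not the real repos' contents.

```go
// sdk-vppagent/go.mod (illustrative)
// The platform-specific module requires the platform-neutral SDK,
// pinned to a specific pseudo-version. The SDK's own go.mod never
// requires sdk-vppagent back, so changes flow SDK -> sdk-vppagent
// -> downstream consumers, never in a cycle.
module github.com/networkservicemesh/sdk-vppagent

go 1.13

require (
	github.com/networkservicemesh/sdk v0.0.0-20200113000000-abcdef123456 // hypothetical pin
)
```

Because Go refuses circular module requirements, as noted below in the discussion, keeping the requirement one-way is what makes the "merge SDK first, then bump the pin" workflow smooth.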
Does that make sense? Yep. And come to think of it, I don't think Go permits circular module dependencies. Yeah. So literally, we can't hurt ourselves in the worst way here. It doesn't mean we can't hurt ourselves, but the worst errors are precluded. So okay, let's go. Okay, so I'll start breaking pieces off of the work that I've done and dropping them in as PRs for the SDK repo. The other nice thing about this is we literally have had some folks turn up and say things like, I'm looking for starter work. We even had someone turn up who said, I program in these other languages, but I don't know Go, and I'm looking for starter work. And because the stuff that we're doing here in the SDKs is so small and self-contained, something like "how about you go build the unit testing for this module" is a very small chunk of work and very instructive if you're newer to network service mesh or newer to Go. Cool. Okay, awesome. So I think we've sorted that out.

On SR-IOV, who brought this up? Because this is awesome. That was me, Ryan. I don't see Przemek, who I think has been doing a lot of work on this. So my question is really: I can help move this along, and I hope we have the right folks here to give me some pointers. I know CI has kind of been an open question. I know there's a couple of PRs out there. It would probably be useful if we were able to actually pull down the code and run it. Yep. So hang on a little bit. Out of all those things, does anyone know where we're at with this? Because otherwise I think it's hard to move this ahead. Yeah. So I can tell you what's in my brain about this, which doesn't make it real, but it does make it what's bouncing around in my head. Right.
You remember how we talked about the forwarder-to-cross-connect network service transition, where instead of having something that speaks the cross-connect API, you just have a network service that speaks the same network service API that's providing the forwarder stuff? Yep. So effectively, in that world, you would want to take the work that's been done for SR-IOV and transition it from using the cross-connect API to using the network service API. And if you'd like, what I can actually do is, we can start by my pointing you to how the forwarder is coming together right now for the existing forwarder, which hopefully will give you a pretty good notion of what that's looking like. And then you can tell me if that looks sane to you, and we can talk through a bit what you think needs to happen to transition the good work that Przemek did in that direction. Does that make sense? Sure, yeah. And does anyone know if he's still working on these PRs that are outstanding? I guess I don't know who's working on what. Yeah. My experience with Przemek has been that he seems to be closer to the embedded guys. I don't know if you've ever worked with embedded folks; they tend to be kind of quiet and introverted. So you may just want to ping him on Slack. Okay. Right. Feel free to ping him on Slack via direct message, or pull him into the NSM dev channel or whatever. Because I think what you're basically saying is: this has me really excited, how do I help? Is that more or less your message? Exactly. This is something I'd like to see come to be a thing. Oh yeah, lots of people are super excited about this. There was one unfortunate timing issue, which was that we had a refactoring, so we mentioned the refactoring of the forwarding elements of the API. We worked out that we can change these things into a network service.
So you don't ask specifically for, like, an SR-IOV thing; you can get them chained in as part of, as a service themselves. And so the hardware is now represented as a network service once these patches are merged in. And this simplifies things a lot down the chain, because the previous chain was quite complex, and then we had to work out, well, how do we select which one we want to run, and a whole range of other issues. And many of those issues go away. So it would be absolutely fantastic, once those patches land that Ed was talking about earlier, to make sure that this stuff gets transitioned nicely toward that. And so if you want to be helpful there, I would recommend offering help along those lines if you have the cycles to do so. Okay.

Sorry, I wanted to add that I reached out to Przemek, I think before the holidays, about this talk that we're preparing, we will be preparing, of course, if accepted, for KubeCon. So I can say that he's pretty responsive and he's open to any feedback. And my experience with him has been absolutely fantastic in terms of responsiveness. It's just, and this has been my experience, engineers differ in their propensity to attend meetings, and he does not seem to be a meeting guy. So if you look around the meeting and say, where's Przemek?, it's not surprising to me. He's still pretty responsive. Yeah, that's fine. I guess it sounds like I just need to reach out to him. So that's the big thing. And same thing with Radislav, because I think Radislav is also working on this as well. Yeah, exactly. I was going to mention that I'm trying to prepare the environment on my side so I can try and evaluate his PR and so on. So it will be more than awesome to join this.

So one other area where you can help out as well: this stuff will eventually need to land in our CI, and we need to ensure that it works in testing.
So if you have any operational experience with SR-IOV, helping us set that up in CI and making sure it runs would also be extremely helpful. And we have credits through packet.net, which has ConnectX cards from Mellanox and also has some Intel SR-IOV-enabled chips. So we can make sure that we land on the right instance type, if that's something you're interested in helping us with. Also, I know you mentioned, Radislav, that you're trying to get an environment going. One of the simpler things we might do is just arrange for you and Ryan to have access to a couple of Packet boxes that you can play with that would have it available. That would be awesome for me. I have access to a lot of Intel hardware that I've been playing with for other things, but I'm having a hard time hip-checking the people that have access to the Mellanox hardware out of the way. I'm sure we can arrange that for you and for Radislav. Is that something that would help you as well, Radislav? Yeah, for sure. Okay.

I mean, normal courtesy applies: try to pick the smallest box that meets your needs, try to be in a position to turn the box off when you're not using it, that kind of stuff, because it's just courtesy. This is also generously donated by Packet; we don't want to burn their resources unnecessarily, but we also do want to put them to good use. And what you guys are doing is definitely stuff that would be good for them and good for everyone else. If you can send me the email addresses that you would like to use for your packet.net accounts, send them my way and I'll make sure that you gain access to the project. Yep. The easiest way to get hold of me is through Slack. And if we do it in Packet, I guess it will be helpful for the CI as well, because it would already be there. Yeah, that's exactly the thought: doing it in Packet, especially since one of the pieces you're going to want to do is, we're not just plumbing a random SR-IOV interface into a pod.
Yes, we do that, but we're not just doing that. Network service mesh can also let you have some sort of network service endpoint that will muck with your physical network to give you the service you want. And packet.net has a very simple API for the networking they present to their servers that would let you do something like assign a VLAN and connect VLANs to things. So you could even do a toy network service endpoint of that sort against Packet. And that would not only be a great demo, it is also immediately useful for the CNF Testbed folks. Yeah, also, Nikolai pointed out something interesting in the stream: Equinix acquiring Packet. So, interesting news. Yeah, it looks hot off the press. It looks like a good thing as far as I can tell. It's sort of the peanut butter and chocolate, two great companies that taste great together. Cool.

So, are we done with SR-IOV, or is there anything else we want to discuss on that? I think we touched on everything I wanted to touch on. Okay, cool. I'm happy to move on. Thank you very much. Let's move on to Michael's item, the NSM example with arbitrary network topologies. So you have the floor. Yeah. Cool. Do you want to share your screen? I can drop the sharing. I can try and share. Yeah, I'm on a very dodgy internet connection, so I can show things for you if you guide me, whatever. So I guess, by the way, we can start from this. So yeah, it's nice to meet everyone; it's my first time here, and I'm excited to be here. Thank you.

So I come from a very traditional kind of networking background, and one of the things that we tend to do a lot is to lab some of the designs that we're about to implement in the physical network.
And that usually involves building network topologies out of virtual devices, and doing this for a relatively small network is easy; for a medium-sized or large one it's, not impossible, but a big task. And so quite some time ago I came across this problem of trying to simulate large-scale physical network topologies with virtual devices. I started looking at Kubernetes, and back in the day I decided to go down the path of writing my own CNI plugin, because back then CNI did not properly support multiple interfaces, etc., etc. So, long story short, I presented this during ONS, the Open Networking Summit, in Europe, and that's where I met Nikolai. And somehow it transpired that the work Nikolai was presenting was so close to whatever I was about to present that it wasn't even funny, because they basically, in their presentation, presented half of my presentation, so I had to kind of repeat them. But anyway, back then Nikolai said, let's cooperate and build something based on NSM.

So the idea is that I have some virtual network device, which in this particular example is Quagga built inside an Alpine container. But in real life it could be anything; all the big vendors, Cisco, Juniper, Arista, have virtual devices that can run inside either a VM or a container. And the idea is we wrap them in a container format and then distribute them, or use the Kubernetes scheduler to distribute them, across multiple nodes. And then the main point is interconnecting them the same way they would be connected in the real network. And in this particular case I'm using NSM, and I'm slightly abusing all the concepts of NSM. The client and endpoint almost lose their meaning in a way, so that every link contains a client and an endpoint.
And they can be arbitrarily assigned to either end of the link. So I need to be very... That's actually, in some sense, fine and expected. You mentioned you're a deep networking guy: do you remember the LISP guys when they realized that nobody cares about ingress tunnel routers or egress tunnel routers, it's just xTRs? Yeah, yeah. You've kind of come right out of that place, and that's cool. Yeah, yeah.

So yeah, for me, the main point was to build those point-to-point links; it doesn't matter how they're built. In this case I use the kernel forwarder, which just connects the two with one of two types of link, a VXLAN or a veth link. And the main idea is: I have a certain topology that has a certain layout, and I have some configuration that I want to apply to each one of those devices. And as the end result, I want to get a full-blown topology that I can interact with, troubleshoot, debug, try new designs on, whatnot. So this particular example just shows five routers connected like this, and they each run OSPF on all their links, and they each have a unique loopback. And the testing that I do at the end is just, from router 5, trying to ping every other router in the topology.

So, yeah. So I set about abusing NSC and NSE. I'm not using the standard admission webhook; I'm basically statically injecting the NSC as a sidecar with all the correct labels that I need, and the same on the NSE side. But at a high level, this is a new example. But then what I also did is, a couple of years ago, when I first did this with my own CNI plugin, I had written a kind of high-level topology orchestrator; "orchestrator" is a very generous word for it, it's just a basic Python script, 400 lines. But what it does is effectively take the topology in a very well-defined and very concise, short form. So it's literally, if you could click on the network service mesh folder, I believe, and click on the NSM 5-node YAML example.
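The per-link arrangement described above, every link gets exactly one client (NSC) and one endpoint (NSE), with the role assigned arbitrarily to either end, can be sketched as a small planning function. This is an illustrative sketch in Go rather than the actual Python script; the naming scheme and types are hypothetical.

```go
package main

import "fmt"

// Link is one point-to-point link in the topology definition.
type Link struct{ A, B string } // device names at each end

// Attachment is what each pod needs for one link: a per-link
// network service name and a role. Every link needs exactly one
// client ("nsc") and one endpoint ("nse"); which end plays which
// role is arbitrary, as discussed above.
type Attachment struct {
	Device  string
	Service string
	Role    string
}

// PlanLinks turns a topology's link list into the sidecar
// attachments each device pod needs; labels and manifests would
// be derived from these. The service-naming scheme is made up.
func PlanLinks(links []Link) []Attachment {
	var out []Attachment
	for i, l := range links {
		svc := fmt.Sprintf("link-%d-%s-%s", i, l.A, l.B)
		out = append(out,
			Attachment{Device: l.A, Service: svc, Role: "nse"},
			Attachment{Device: l.B, Service: svc, Role: "nsc"},
		)
	}
	return out
}

func main() {
	// A tiny triangle topology: r1-r2, r2-r3, r3-r1.
	links := []Link{{"r1", "r2"}, {"r2", "r3"}, {"r3", "r1"}}
	for _, a := range PlanLinks(links) {
		fmt.Println(a.Device, a.Role, a.Service)
	}
}
```

The design point is that the topology file only lists device pairs; the tooling mechanically expands each pair into an NSC/NSE sidecar pair sharing one per-link service name.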
So that's a similar kind of example to what we've seen before. That's the whole definition of the topology — that's how it's supposed to be connected. Then you can also optionally supply configuration files that get injected into each container as a startup config, which is where you define your OSPF config and whatnot. And then you basically run one command that says create. It takes that file, and the config files get injected as ConfigMaps into Kubernetes and then subsequently as volumes. And at the same time it creates all of the manifests that are required for NSM — all the NSC and NSE sidecars with all the correct labels for both ends of the link, blah, blah, blah — including injecting the actual network service itself. So basically it does a lot of the heavy lifting. All you have to worry about — all I particularly have to worry about — is what the topology looks like, not the funny bits like the annotations and labels and the environment variables. So yeah, I haven't done a lot of testing. To be honest, I just finished it at the end of last week, when I was traveling back and had some time to work on it. So there may still be some bugs, but based on my preliminary testing it seems to be working fine. It produces the right manifests and it does what I want it to do. I'll probably do some more testing once I get to the actual real-life use cases. But for now, this is it. If I may just add a couple of updates here. First of all, I really, really, really liked that moment when I saw Michael's presentation and was immediately thinking, we should have this in NSM. So I'm really glad that we finally have it. This whole discussion started back in September, and a couple of months later, we already have it.
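The "heavy lifting" described here — expanding each link into a matched client/endpoint sidecar pair — could be sketched roughly like this. Everything below is illustrative: the function, the `role` values, and the label keys are invented for the sketch and are not the tool's actual code or NSM's actual manifest schema.

```python
# Hypothetical sketch of the link-expansion step a topology orchestrator
# might perform: every point-to-point link becomes one network service,
# with an NSE sidecar on one device and an NSC sidecar on the other.
# Which side gets which role is arbitrary, as discussed above.

def expand_links(links):
    """Map each device name to the list of sidecar descriptors it needs,
    one NSC/NSE pair generated per (device_a, device_b) link."""
    sidecars = {}  # device name -> list of sidecar descriptors
    for i, (a, b) in enumerate(links):
        service = f"link-{i}"  # one network service per link
        sidecars.setdefault(a, []).append({"role": "nse", "service": service})
        sidecars.setdefault(b, []).append({"role": "nsc", "service": service})
    return sidecars

# A 3-router ring: r2 ends up with two sidecars, one per attached link.
print(expand_links([("r1", "r2"), ("r2", "r3"), ("r3", "r1")]))
```

A real orchestrator would then render these descriptors into pod manifests with the right annotations, labels, and environment variables; the point is only that the per-link bookkeeping is mechanical and easy to generate.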
The other thing that's really interesting for me is that this is the very first time we have a tool that uses NSM as an API — it generates configuration files for NSM. So let's say it's the first controller that leverages the NSM API, at least to the best of my knowledge. I was surprised by many things at the last NSMCon; who knows, maybe at the next one there will be a lot of other interesting stuff going on. And what I wanted to do — so what we have today is that all these files are statically generated, but I would really like, in follow-up PRs, to make it so that all of this is generated at runtime. And then eventually — I don't know why — I was imagining that I'd be able to generate even these topology files. So I'd say, I want a 10-node topology; that file gets generated, it goes to k8s-topo, and it generates all the Kubernetes manifests. This is all super exciting stuff, because two things strike me. The first one is, it's good to have outside people using stuff. And so one of the things I'm curious about is: what was your experience using NSM, particularly using NSM for something that wasn't exactly what it had been envisioned for when it started? So, to be honest, it works fine. I came across a transient bug that I knew ahead of time existed. That bug is: when an interface for the pod gets created, it first gets created outside of the pod's namespace, in the host network namespace, and then it gets moved into the pod's namespace. So when you have two pods coming up at the same time that use the same interface names, they may have their interfaces created in the host namespace at the same time with the same name, and the last call to create an interface is going to fail saying "file exists".
And then it sort of cascades from there — because it happens very fast, I haven't had time to build a full story of what happens after that. It sort of goes away after some time. And another library that I used to use before, when I was doing this with my own CNI plugin, worked around that by first generating a random interface name when creating it in the host network namespace, then moving it to the pod's network namespace, and then changing it to the actual... That's actually — if you could file that issue, that would be fantastic, because that's something we should be able to get fixed. I'll try to collect enough logs and information about it before I log the issue. It's a fantastic catch, and it's a good example of how many eyes make all bugs shallow. We don't happen to hit that particular case, but it's immediately obvious that it's a real problem when you mention it. So this is super good. We can definitely get that fixed, and you're right, that is a good thing to get fixed. Yeah, I mean, the other thing that's super nice about this is that this was absolutely not what we designed for. That said, I've always maintained that the measure of an architecture is not whether you meet your requirements; it's whether it can easily be adapted to meet new requirements as they arise that you never thought of. And it sounds like this has actually gone really well, and that's heartening. It kind of means we maybe are on the right track. Yeah, it's a low enough level of abstraction to let other people build something it wasn't designed to do in the first place. In that sense, it's good. So the only issues were that bug, and the fact that the documentation was a bit sparse in some places — particularly the notion of labels. I still don't fully understand the client and endpoint labels, how they're passed around, and their meaning.
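The workaround described for that race — create the interface under a unique temporary name in the host namespace, move it into the pod's namespace, and only then rename it to the final name — could be sketched like this. The netns/rename operations are passed in as stubs because the real calls (netlink, etc.) are beside the point; only the naming scheme matters here, and all function names are invented for the sketch.

```python
import secrets

def temp_ifname() -> str:
    """Generate a unique temporary interface name. Linux caps interface
    names at 15 characters, so keep it short: 'tmp' + 8 hex chars."""
    return "tmp" + secrets.token_hex(4)

def create_in_pod(final_name, create, move_to_netns, rename):
    """Create an interface race-free: two pods racing to create 'eth1'
    no longer collide in the host namespace, because each works under
    its own random temporary name until the interface is inside the pod."""
    tmp = temp_ifname()
    create(tmp)              # created in the host namespace under the temp name
    move_to_netns(tmp)       # moved into the pod's network namespace
    rename(tmp, final_name)  # renamed only once it is safely inside the pod
```

The key property is that the name collision window in the shared host namespace disappears; the final name only ever exists inside a single pod's namespace.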
But we had a whole discussion on this back at NSMCon, and the consensus for most people was: hop on Slack, ask a question, get a response two minutes later. So yeah, there was a funny point. We had a panel after NSMCon where the speakers spoke about their experiences, and we heard two conflicting opinions. The first was: well, the documentation is a little sparse, and that's hard. The second was: oh, that's fine, just jump on Slack, you'll get your answer in two minutes. Those came from two different people who had both hit something that wasn't well documented. And I think you shouldn't have to jump on Slack, so I'm hugely in favor of getting better documentation. But it is a good thing that the Slack community helps. Yeah, definitely. Definitely. Okay. Is that all, Michael? Do you want to add anything? Yeah, that's it. I guess we can discuss offline whatever extra future steps we want to take — generating this at runtime and all that. So, yep. Great. That's it. Thank you. Thank you very much for presenting this. Cool. And if you run into any problems, definitely feel free to reach out to us; we'll do our best to help you. And with that — we have five minutes left on the agenda — is there anything anyone else would like to discuss before we close it up? Okay. Well, with that, I want to thank everyone for your time, and we will see you all again at the same time next week. Y'all have a good day now. Thank you. See you. Bye. Bye. Cheers.