So we'll construct the agenda on the fly, which we always do anyway, but it's going to be a little more on the fly this week than usual. We also have events: we've got good news on the KubeCon front, and we also have good news on the ONS EU front. Not for this project, I guess. Okay, sorry. No, that's fine — I mean, I would love to see feedback from folks about the ONS EU experience. Very good. By the way, for folks who just joined, let me try to stick the link to the meeting minutes in the chat — if I can find the chat; I don't seem to have access to a chat in Zoom anymore. And let me actually bring up and share the meeting minutes really quickly and we'll just walk through them live. One second while I clear things up in my Chrome browser. Traditionally speaking, we as a community tend to edit the meeting minutes live anyway. So let me get the share going. I see share, Google Chrome, there we go. Can everyone see the Google Chrome window? Yes. Cool. So the good news is we do have two Network Service Mesh talks happening at KubeCon. Let me get them linked in here real quick. The CFP closed for KubeCon, correct? Right, but we do have two Network Service Mesh talks on the KubeCon schedule, so let me go ahead and get the links to those in. And thanks, folks, for putting yourselves onto the attendees list — that helps a lot. People are often wondering, hey, who all is involved in this, and that's a good way for them to see. We also wanted to add to the agenda — Maciek, I think you wanted to add an item about the VNF/CNF testing and benchmarking stuff. That was one of the ones we didn't quite get to. That was actually not just me, but also Mikhail, who is now here. Yes, that's correct — I think I just heard Mikhail's voice. I am here. Michael or Mikhail — how do you pronounce your name?
You're Swedish, so it's Mikhail, correct? I'm Danish, so yes — the first one, Mikhail. Danish, okay, Mikhail. So it's not Michael, it's Mikhail. Okay, thank you. You can call me Michael or Mike, I don't mind. No, I prefer to go native, so I'm going to say Mikhail. Thank you. All right — and thank you for bringing that up, Maciek. I do try to get people's names right, and I'm not usually very good at it, so I'm glad you took the time to figure that out. So, Mikhail — excellent, cool. We're a little light this week because I think a bunch of people are still at, or in transit from, ONS Europe. So let's go ahead and dive right in. Is there anything else folks would like to see in the agenda that we don't have here currently? If you don't mind, I'd like to put the action item review at the end, because I think some of the more important things have bubbled up to the top of it and are getting their own items here. Is that fine with everyone? Sure. The only thing I have — sorry, I need to drop off at half past, so if you could cover the lab stuff or the CNF/VNF stuff before then, I would appreciate it, thank you. That's awesome. Hey Ed, real quick — we've done some very brief arm-waving about a possible demo at KubeCon, an NSM demo of some sort or format. Yep — good point to add to the KubeCon Seattle discussion. So, anything else folks feel we need to add to the agenda? By the way, folks should feel free not only to add items to the agenda live in the doc, but also to help in the process of taking notes. It is really useful if you do that. Cool. So, getting down to events — I know there was a bunch of Network Service Mesh stuff going on this week at ONS EU. Maciek, do you want to comment on any of that? Well, Giles and I were there, with Kyle and... You've got a little bit of background noise there, so if you could mute. Yeah.
So, Frederick and Kyle presented. I went to one of the talks — the one where they do the dialogue with the Spiderman in the slides. I think it went very well. There was a lot of interaction with the room and a huge amount of interest. I personally enjoyed it and was glued to the presenters, the screen, and the content. So I liked that. I also know that Frederick and Kyle ran some side workshops, but I don't have any feedback on those as I didn't attend them. I did attend a happy hour together with Giles and we had some good fun there. So that's my feedback. I don't know, Giles — do you have anything more to add from your experience at ONS? No, I think it was good. Yeah, I think it was good. I mean — so, Ed, we were probably going to run those same slides when we do the thing in Paris in a few weeks. It seems like everybody loves that. Loves your animated spider. Yeah, that story has been well received. It is kind of a crazy impressive number of slides, but they go very, very fast because they aren't actually dense. Yeah, exactly. But people tend to identify with the protagonist, and that's always a sign of good literature. Cool. Awesome. I'm glad that went well, and hopefully we'll hear a little more when Kyle and Frederick make it back. Then, in terms of events coming up, the next big one is KubeCon Seattle, December 10th through the 13th — technically the 11th through the 13th, but there are some co-located events happening in town that are probably cool to go to as well. At KubeCon Seattle we have two Network Service Mesh talks on the schedule: one is the Intro to Network Service Mesh and the other is the Network Service Mesh Deep Dive. So if you can make it, we would love to see you at those talks.
We would also love it if you could promote those talks to other people who might be interested in Network Service Mesh. I think that would be good. And then we've got Chris Metz pointing out that we need to come up with a Network Service Mesh demo, and suggesting we do things like podcasts and blogs leading up to KubeCon as well, which I think is a good set of suggestions. So part of my question to the room would be: what do we, as a community, think we would like to be able to demo at KubeCon in terms of Network Service Mesh? Actually, are you sure this is the right question to ask, Ed? I'm open to this not being the right question. Shouldn't the question be: what do we expect to be working? I don't think it makes sense to hack up some throwaway code just for the demo. Talking to Frederick and Kyle, they were saying that steady progress is being made, but I think the question should really be what is expected to be working, and based on that, work out the demo scenario per Chris's request. That would be my suggestion, at least. That's an excellent suggestion. I tend to think about things in terms of priorities, because my experience has been that when you set definitive goals — saying we will do X by date Y — you tend not to do more than X. But when you set a list of priorities — these are the prioritized things we're working on, and we need to at least get X working by date Y — then you're much more likely to overshoot your goals. So maybe the right question is: what is the prioritized list of things we would like to show for Network Service Mesh? And then we can see what we can actually do to get from here to there. Does that make more sense? Yeah, just prefacing those remarks. So I guess this demo would be some sort of portfolio of material that we'd want to expose to the community.
So, Maciek, to your point: even if it is a hack, there could at least be some things we show existing in the cluster, like the NSM agent, and whatever calls might be established or set up to program the cross-connect. I think that contributes not only to the KubeCon presentations and the website, but also gives the audience something else to look at and picture in their minds. Walking away, we want them to think: hey, networking is happening, and it's happening in the cloud — this is a really cool solution. Yeah, I think the tagline I used at OSS was "Network Service Mesh: making networking sexy again." Which is a good aspirational goal. But yeah, we've got to work with some element of running code, so I think we need to quickly determine what we might have working by then, or before then, so that we can start to build this thing. So maybe what we should do is put a pin in this, think about it, and revisit it next week to see what conclusions we've come to as a group between now and then — a lot of the community is out this week. Before you leave this topic, Ed: there's also an FD.io day at KubeCon. I don't know whether it'll be accepted or not, but I submitted something to the FD.io day about constructing a simple example — a layer 2 connection only — using Network Service Mesh. Of course, some of the work going on right now in defining the NSM data plane protocol might be important for that, because that seems to be the mechanism we're converging on. But I hope to have some code actually written that's demonstrable, at least in an isolated environment. Very cool, very cool. So let's mull over what we want to bring to KubeCon; we'll talk about it a little next week, and we can talk about it in the intervening time. Everybody okay with moving on to the VNF/CNF testing and benchmarking?
Cool — Mikhail, Maciek, you guys are up. All right. So, Mikhail, we exchanged some emails on where things are. I understand you guys are using packet.net or something similar — the folks renting out their physical servers; I keep forgetting their domain name. Taylor, Watson, and Lucina briefed me on where you are. We've actually been chatting every day since ONS, and I understand you got the VMs and containers working, but I didn't know whether you were actually able to run any data plane tests yet. So that was my question. I have been running some basic data plane tests so far using NFVbench, which connects to T-Rex. And so far I've been focusing on 64-byte packets, to try to make sure we don't bottleneck on the network interfaces, since we only have 10-gig connections available. Yeah, I understand that. So what we've done is scale down the number of cores used for the VMs and containers to pretty much the minimum we can run with. Okay, and what sort of rates are you getting? Are they comparable with what we are measuring? Because if you get a server at Packet, and you get one that has— You're breaking up; we're losing you. So do you have any numbers to share — any results, anything at all? Yes — can you hear me now? I think my connection dropped. I hear you now. All right. Yeah, so we have the numbers, and for a single chain we're reaching 8.34 million packets per second for the VM — or no, let me get the actual numbers, because we scaled it down a bit so it might be less. By single chain here, you're talking PVP, correct? Yes. And the vSwitch is what — OVS, or obviously VPP? VPP; the vSwitch is VPP. And the VM is running testpmd, or what? Also VPP. Also VPP, okay. And it's vhost-user virtio for the VM, yes? Yes. Okay, I'm just updating the notes, sorry. Okay. And you also got the same for the container?
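As a quick aside on why 64-byte frames are the interesting worst case: on a 10GbE link the theoretical frame rate at 64 bytes is about 14.88 Mpps, comfortably above the 8-12 Mpps the chains reach, so the NIC itself is not the limit. A minimal sketch of the arithmetic — the helper name is ours, not from NFVbench or any other tool:

```go
package main

import "fmt"

// maxFrameRateMpps computes the theoretical maximum frame rate on an
// Ethernet link. Each frame carries 20 bytes of on-the-wire overhead
// beyond the frame itself: 7B preamble + 1B start-of-frame delimiter
// + 12B inter-frame gap.
func maxFrameRateMpps(lineRateGbps float64, frameBytes int) float64 {
	bitsPerFrame := float64(frameBytes+20) * 8
	return lineRateGbps * 1e9 / bitsPerFrame / 1e6
}

func main() {
	// 64-byte frames on 10GbE: ~14.88 Mpps, so the vSwitch, not the
	// NIC, is the bottleneck at the rates discussed here.
	fmt.Printf("10GbE @ 64B:   %.2f Mpps\n", maxFrameRateMpps(10, 64))
	// Large frames for comparison: ~0.81 Mpps, where the NIC would
	// saturate long before the vSwitch does.
	fmt.Printf("10GbE @ 1518B: %.2f Mpps\n", maxFrameRateMpps(10, 1518))
}
```

This is why small-packet tests are the ones that actually exercise the vSwitch data path rather than the link.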
Yes — the only difference is we're using a memif interface. Sure. And so what are the numbers you're observing — you said 8.3 million pps? Let me get the most recent ones I have available; just a second, I should have them open. And what is the packet loss ratio you're measuring at? I'm using MRR measurements. MRR — MRR as defined by CSIT? I'm not sure if it's CSIT. What do you mean by MRR — maximum receive rate? Yes. That is CSIT; nobody else has defined the term, and we'll be defining it in an IETF draft. Yeah, I think I heard it from Ed at some point, so that might be how I got to it. That's okay. All right — that's for the VNF, correct? That's for both of them. But I'm not even sure I'm completely compliant with it, to be honest. What I'm doing right now is pushing line rate, and then I've done measurements at lower rates as well. Yeah — as you probably know, MRR is very forgiving for the computer, because basically we don't care about the PLR, the packet loss ratio. So it is a good indicative measure. However, for measuring memif interface efficiency, we'd probably want zero packet loss, or some tolerance. So we should measure both — that's what we're doing in FD.io. Yeah. But people don't really care about MRR; it's more for the developers. I know, but it's also just to get some baseline... Yes, yes. Do you know what computer you're running on — one of the ones listed on packet.net? Is it the Skylake Gold they list as available, or something else? Yes, it's a Gold 5120. 5120 — and you're running it with hyper-threading? Can you run two sibling threads? Okay, yes. Which NICs? Mellanox ConnectX-4s. And we only have one port available, since the other port is used for external connectivity. ConnectX-4s, you said? Yes. Okay.
And they go through a switch, right? I would imagine so, yeah. Okay, so you got 8.3 Mpps for the VM. What about the container? Actually, just to get the correct numbers: it's 8.13 million packets per second for the VNF. 8.13 — and for the container? 12.24. 12.24, okay. And also MRR, yes? Yes. And this is the single-chain PVP. Yes, okay. And what's the clock for the CPU? Let me just get it — 2.2. And you have Turbo Boost disabled and all that, yes? I'm actually not sure — no, it looks like Turbo Boost is actually enabled on here. Ah, then we don't really know what we're measuring — you know how that works, right? At least I see it running at 2.8 now, so something is going on. Okay, all right. So the good news is you've got to the point where you're making measurements; you have the T-Rex scripts and NFVbench. So now it's just tuning, right? Yeah, and I've actually already been doing a bit of that, just to get the highest possible numbers given the setup that we have. Perfect — and actually, Turbo Boost on is a fine number, because we did report Turbo-Boost-on results in Copenhagen, and we provided another readout with Giles on Wednesday in Amsterdam. Hopefully the slides got posted — we sent the slides a few hours ago. Yeah, so this is cool; we just need to keep moving. So shall we have a breakout session maybe next week to walk through the details? I know Taylor was very interested in making progress. Yeah, we can definitely do that. Dan Kohn is also paying attention, because he wants to show some of this in his keynote. Yeah — and I can... sorry, go ahead. I just wanted to add that over the last couple of days I actually got the multi-chain CNF setup working as well. So now I can run six CNFs in a chain on NUMA 0. Six CNFs in a chain — and are you doing horizontal memifs, or through the vSwitch? Horizontal.
Horizontal — that's the biggest impact. No, no — never mind: vertical, vertical. Well, it depends what you mean by horizontal or vertical in this case. By horizontal, I mean the two containers talk directly to each other instead of talking through the vSwitch. They talk to each other. Okay, and how many do you have? Up to six, in a single chain. So you'd say P-C-C-C-C-C-C-P. Yeah. Or we can call it from one to six, yes? Yeah. All right — so what this does is two to six, since I have a separate script for doing this. In fact, I think the proper notation, if we use a regexp, is something like PC{1,6}P: a single chain is P, then one to however many Cs, then P. So that's cool. So what numbers are you seeing, Mikhail? Let me just find them. So, in millions of packets per second on the 10-gig connection: it starts at two CNFs with 11.51, then 11.26, then 9.94, then 9.99 — basically 10 if you round it — and then 9.84. So there's a bit of a drop and a bit of variation. Let me see if I can post them in chat; that might be easier, if you're not watching what I'm typing. Yeah, I only have the one screen, so I'm kind of jumping between everything. I posted the numbers now in the chat. So do you have numbers for the multi-chain VNF as well? No, I'm still working on setting that one up. The way we're doing it right now, there's a bit of hacking involved to get this to work through Vagrant, so I'm trying to find a way to get those up and running as well. Okay, cool. So I'm wondering why you see a drop. Yeah, that kind of surprises me as well. This is based on five iterations of each run.
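The chain notation sketched verbally above can be written down as a tiny regexp check. This is a hedged reading of the discussion — the `PC{1,6}P` pattern and the `chainLabel` helper are our own shorthand for "physical port, one to six container NFs, physical port," not any tool's official test naming:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// chainLabel builds the label for a single service chain with n
// container NFs between two physical ports: PCP, PCCP, ..., PCCCCCCP.
func chainLabel(n int) string {
	return "P" + strings.Repeat("C", n) + "P"
}

// chainRe is one way to write the spoken "P, one to six Cs, P"
// notation as a regular expression.
var chainRe = regexp.MustCompile(`^PC{1,6}P$`)

func main() {
	for n := 1; n <= 6; n++ {
		label := chainLabel(n)
		fmt.Println(label, chainRe.MatchString(label)) // all print true
	}
}
```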
And this is also MRR in every case, yes? Yes — and I have numbers for lower rates as well. So if anything, it might be that once we start having a lot of drops, it impacts the performance. No, no — it's not that. I know what it is; let's take it offline. Sure, sure. It's a configuration thing. So it sounds like there's a desire to put together a breakout in the next week for folks to see this. Mikhail, quick question, because I need to drop off: does Monday work for you? What time zone are you in? I'm sitting in Arizona, so... Arizona is fine. You're currently on Pacific until they move to winter time — except Arizona doesn't change. I know you don't move, which I think is very wise; everybody else does, which is stupid. I'll propose something Monday morning your time, okay? Yeah, that's fine. Thank you. Thank you, guys — this has actually been really helpful. This is going to be very key for a lot of folks who are looking at developing and adopting Network Service Mesh for cloud-native NFV, and these kinds of compelling results are going to be really, really helpful as we move forward. Yeah, I need to drop off. Thanks very much, guys. Thank you. Thank you. Frederick — you made it. Excellent. Frederick, are you here? Yes — multiple mute buttons. Hello. All your mute buttons are belong to you. So, real quick, do you want to give us a quick readout on the ONS EU experience? Sure. So I ended up giving a few talks in different venues. I had a conversation with some of the people from Ericsson and Lumina over at the DDF; they're both interested in helping out. And this was the OpenDaylight DDF? Yeah, that's correct — that was right before ONS.
And Anil has said that in the next few days — he's probably traveling right now — he was going to reach out, and he wants to start contributing code to the project. For me, that's a huge win. Let's see. One of the things I pitched to the ODL team focusing on the COE project is that they build out not only the COE CNI plugin itself, but two other things as well. Number one is a library written in Go — sort of like the VPP agent, which you can use to control VPP — writing something similar so you can do that with the ODL side. But the more important one is that they also create either a network service endpoint that would use this library, or some form of an NSM component they could then use to lift various features from ODL. So I've gotten them to start thinking about what such a thing would look like. One thing I cautioned them on — in case you get asked about this later — was not to mix the CNI stuff and the NSM stuff together, because there was some talk of, oh, we can merge them all into one super-project and so on, and I was trying to dissuade them. Modularity is good. Yeah. As for the sessions themselves: Kyle and I ended up giving a session on Network Service Mesh. We got a really good turnout — I feel bad for some of the other sessions, because they were probably empty because of us. Do we have any estimate of the head count for the turnout? I didn't take one; maybe someone else on the call has a better estimate, because I was more focused on giving the talk than on counting people. Fair, totally fair. I also ended up having conversations with some other people — actually with the person from Intel, I think it was Ivan Coughlin.
And so we're going to have discussions on how we can better position things. What he wants is guidance on when you should use Network Service Mesh versus when you should pull in Multus, that kind of stuff, so I'll help him write that guidance up. One of the things happening that I want to be really careful with is that there are a lot of misconceptions as to where Network Service Mesh sits, and some of the people in the Multus community are a little apprehensive about our project. Rather than letting things just evolve on their own, the two of us are going to work out where the best positioning is, so that people can be more confident in where the lines fall. Let's see, what else? There's something else I'm forgetting — a few things, probably. I ended up having a talk with one of the Swiss telecoms — it might have been Swisscom itself. They're interested in some of the Network Service Mesh stuff as well, so I'm going to see if I can get them to start giving us some of their use cases where they think it might be useful, and help them understand it further. I connected with one of them through another avenue, so worst case, if I can't get them onto this meeting or onto the mailing list, I'll see if I can at least pull the requirements myself and then transcribe them, with their permission, for our community. Yeah, there was definitely a lot of interest — certainly interest from the CNCF itself. I had a lot of trouble getting hold of Dan Kohn, so I wasn't able to reach him specifically. I'm going to follow up with him in a week or so, now that people have finished traveling and things have settled down, so we can work out what type of messaging he wants to provide and how he wants to proceed with Network Service Mesh as well.
And one of the things they're asking us to do — both CNCF and specifically Linux Foundation Networking — isn't part of NSM directly, but I think it's something we can help out a lot on: providing guidance on what a CNF is in the first place. We've spoken about this several times in the past several weeks, so this is a continuation of that. Effectively, they want help defining what a CNF is, and help working with telcos and VNF providers who want to move over to CNFs. If we're the ones providing that guidance — keeping it, like I said, independent of NSM — we can make sure they don't fall into the same pitfalls we saw application developers hit when they started containerizing their workloads when Docker came out. If there's anything else I forgot, I'm sure I'll think of it right after the meeting and send it over email. How did the happy hour go? The happy hour didn't have as many people as I was expecting. I think one reason was — not a lack of interest, but bad timing: there was a general booth crawl and that kind of thing that ended up running later in the day. Combine that with the fact that there was no good venue we could find close to the conference center — we ended up going to one of the hotels, and the hotels were a bit of a walk, so I think people were like, I don't want to go on a walk. So we got some people who were interested, but the numbers weren't as high as I was hoping. Cool — I appreciate the detailed report. That all sounds like very, very good news. Yeah — but the number one feedback I kept hearing over and over again is that we have to get some form of proof of concept out.
We have to get something running and showing, because right now people can't pick up our work in order to build on it. I do want to build proofs of concept for other things as well; people want to pull us in, but they can't, because we're not ready yet. So we have to get ready. Totally fair. Cool — that actually provides a nice segue into one of the next things we have here, which is the architecture-in-progress work. There's been a lot of conversation going back and forth trying to write down a clean architecture for a lot of these things, particularly so we can pin down some of the APIs. The latest PR back-and-forth, and the area we're currently focusing on most, is the API within Kubernetes. Can everyone see the share, by the way? Okay, good. Yes. So within Kubernetes, there's a data plane API between the network service manager and whatever your data plane is. This is basically how the network service manager asks for cross-connects from whatever data plane or data planes are present on the system. So we've been trying to define this NSM-to-NSM-data-plane API — in other words, what does the NSM say to the NSM data plane? This has things like create cross-connect, update cross-connect, delete cross-connect, and list/watch cross-connects — a pattern which basically says, look, give me the status of the cross-connects you've got. And then there's list/watch mechanisms — we'll get to mechanisms in just a second, but mechanisms are the things you can support. Like: I am a data plane that can do kernel interfaces and VXLAN, and those are the only mechanisms I support; so if you need someone to give you cross-connects for memif and SRv6, I can't help you, right? And so list/watch mechanisms allows the data plane to send information up to the NSM about the mechanisms it supports.
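The API verbs just described can be sketched as a toy Go interface. To be clear, this is a hedged illustration: the project defines these calls over gRPC, and the names and struct shapes below are reconstructed from the discussion, not the actual NSM repository types:

```go
package main

import "fmt"

// Mechanism is the "type plus a map of labels" shape described above.
type Mechanism struct {
	Type   string            // e.g. "KERNEL_INTERFACE", "MEMIF", "VXLAN"
	Labels map[string]string // preferences, constraints, or final values
}

// CrossConnect joins a local attachment to a remote (or another local) one.
type CrossConnect struct {
	ID     string
	Source Mechanism // how the client side is attached
	Dest   Mechanism // how the endpoint side is attached
}

// Dataplane mirrors the verbs from the transcript: create/update/delete a
// cross-connect, plus list (standing in for "list and watch") of both
// cross-connect status and the mechanisms this data plane supports.
type Dataplane interface {
	CreateCrossConnect(xc CrossConnect) error
	UpdateCrossConnect(xc CrossConnect) error
	DeleteCrossConnect(id string) error
	ListCrossConnects() []CrossConnect
	ListMechanisms() []Mechanism
}

// memDataplane is a toy in-memory implementation for illustration only.
type memDataplane struct {
	xcs map[string]CrossConnect
}

func newMemDataplane() *memDataplane {
	return &memDataplane{xcs: map[string]CrossConnect{}}
}

func (d *memDataplane) CreateCrossConnect(xc CrossConnect) error { d.xcs[xc.ID] = xc; return nil }
func (d *memDataplane) UpdateCrossConnect(xc CrossConnect) error { d.xcs[xc.ID] = xc; return nil }
func (d *memDataplane) DeleteCrossConnect(id string) error       { delete(d.xcs, id); return nil }

func (d *memDataplane) ListCrossConnects() []CrossConnect {
	out := []CrossConnect{}
	for _, xc := range d.xcs {
		out = append(out, xc)
	}
	return out
}

func (d *memDataplane) ListMechanisms() []Mechanism {
	// This toy data plane only does kernel interfaces and VXLAN, so a
	// request needing memif or SRv6 would have to go elsewhere.
	return []Mechanism{{Type: "KERNEL_INTERFACE"}, {Type: "VXLAN"}}
}

func main() {
	var dp Dataplane = newMemDataplane()
	dp.CreateCrossConnect(CrossConnect{
		ID:     "xc-1",
		Source: Mechanism{Type: "KERNEL_INTERFACE", Labels: map[string]string{"name": "eth2"}},
		Dest:   Mechanism{Type: "VXLAN"},
	})
	fmt.Println(len(dp.ListCrossConnects())) // 1
}
```

The point of the sketch is the shape of the conversation: the NSM drives the verbs, and the data plane only advertises what it can do.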
And then the other thing we define is a simple registration for the data plane. This is how the Network Service Mesh data plane talks to the network service manager: it just has a registration that says, hey, I'm a data plane, and this is how you can call me back. And then we've been working through sorting out these mechanisms as well, where a mechanism is either a remote mechanism or a local mechanism. When you look at local mechanisms, you get things like a type — the types we've identified so far are kernel interfaces, memif, and vhost-user — and then a map that's a bunch of labels. The way we're currently thinking, these labels could express preferences or constraints, or communicate the final values of a parameter. For a kernel interface, for example, you might have a label name=eth2. So if I'm a pod coming up wanting to be connected to a network service, I might say: look, among my preferred list of local mechanisms, I would prefer a kernel interface, and I would prefer that it be named eth2. And what you get back from the data plane would be the mechanism that was actually sorted out. Then we've also got remote mechanisms defined; they first come up when we look at how NSMs communicate with each other. The remote mechanisms are very similar — they've got a type and a bunch of labels — but the kinds of things you communicate in those labels are somewhat different. So we use the example here of VXLAN, right? You would imagine you'd have source IP, source port, destination IP, destination port, and VNI. When one NSM sends a remote connection request to another NSM, it specifies source IP, source port, and a list of acceptable VNIs, probably expressed as ranges.
And then when the second NSM comes back, it sends back the source IP and port, but also the destination IP and port and the particular VNI it picked, as labels. Makes sense so far to folks? Don't all speak at once. Yeah, this makes sense to me. Cool. Part of this is that there's a lot of conversation happening on IRC back and forth, because Sergey is trying to produce code and — God bless him — he's chasing a moving architecture, which is incredibly brave. But it's also productive, because he keeps poking things and saying, hey, why is this so complicated? And so things get simpler. Do you have any comments you want to make, Sergey? Well, basically just one. If it's at all possible, I would really, really prefer to keep the NSM code away from being mechanism-knowledgeable. There shouldn't be any code in the NSM for any specific type of mechanism. It's just like a bridge — I would consider it a bridge. It doesn't matter if it's a Ferrari running over the bridge or somebody on a donkey crossing the river; I don't care. Yeah, I like that metaphor a lot. You mentioned this idea on the IRC channel, and I like it very much; I'm still mulling it over a little, but that would actually vastly simplify life. So I guess the key takeaway for folks here is that there's a lot going on, and we're relatively close to converging on a pretty simple and powerful set of APIs for NSM-to-data-plane and for NSM-to-NSM. From there, developing gets much simpler and more straightforward, so it should be easier to get involved. But I think this is not remote-mechanism-specific — this is the remote mechanism type. To this point, it's far better to just set up the channel, just the overlay, right? How else would you configure it? Yeah. I mean, the data plane needs all this stuff. Yes, absolutely.
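The VXLAN label exchange described above might look something like the following sketch. The label keys (`src_ip`, `vni_range`, and so on) and the `pickVNI` helper are hypothetical — the discussion only fixes the idea that the requester offers a VNI range and the responder picks one and fills in the destination side:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// pickVNI models the responder's side of the remote-mechanism
// negotiation: the requesting NSM offers a range of acceptable VNIs as
// a label (e.g. "vni_range": "100-105"); the responder picks the first
// one not already in use.
func pickVNI(vniRange string, inUse map[int]bool) (int, bool) {
	parts := strings.SplitN(vniRange, "-", 2)
	if len(parts) != 2 {
		return 0, false
	}
	lo, err1 := strconv.Atoi(parts[0])
	hi, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return 0, false
	}
	for v := lo; v <= hi; v++ {
		if !inUse[v] {
			return v, true
		}
	}
	return 0, false // whole range exhausted
}

func main() {
	// Request labels as NSM-1 might send them.
	request := map[string]string{
		"src_ip":    "10.0.0.1",
		"src_port":  "4789",
		"vni_range": "100-105",
	}
	inUse := map[int]bool{100: true, 101: true}
	vni, ok := pickVNI(request["vni_range"], inUse)
	// The reply carries the destination side plus the chosen VNI.
	reply := map[string]string{
		"dst_ip":   "10.0.0.2",
		"dst_port": "4789",
		"vni":      strconv.Itoa(vni),
	}
	fmt.Println(ok, reply["vni"]) // true 102
}
```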
Where we're starting from is this: when you look at setting up a cross-connect between some pod that wants to consume a network service and a network service endpoint somewhere — presuming that network service endpoint is not running as a pod on the same node — you end up having to sort out the local mechanism, in other words, how you inject the connection into the pod, which might be a kernel interface or might be memif. And then you also have to sort out the remote mechanism, and that remote mechanism has to be negotiated with the network service manager that is managing the network service endpoint. So you have to be able to express, essentially: hey, this is the list of preferences that I, as NSM-1, have for the kinds of things that would be acceptable to me as remote mechanisms. And then NSM-2 has to select one of those and get it back to you. This just becomes a very simple way in the API to communicate that back and forth. I mean, I like what you're showing and I get it totally — I was just wondering about the earlier comment about the NSM being abstracted and not dealing with these details. Okay, good. Yeah — basically, the NSM needs to do a selection process, and there are multiple levels where we could do the selection. First, on the remote mechanism type — that's one level. And second, looking at the actual details for that specific remote mechanism and doing some analysis. I think it would make sense to do the selection at the first level, on the type of the mechanism, but leave the more detailed analysis to the data plane, which actually implements the mechanism and is in a much better position to parse and analyze those details than the NSM code is. Makes sense? I understand. But the low-level driver — whatever mechanism will be in charge of building the cross-connect — needs configuration from somewhere. Yeah, yeah. No, no, absolutely.
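The two-level selection being argued for here — the NSM matches on mechanism type only, leaving label-level analysis to the data plane — can be sketched in a few lines. `selectMechanismType` is an illustrative name, and treating preferences as an ordered list is our assumption from the "preferred list of local mechanisms" phrasing:

```go
package main

import "fmt"

// selectMechanismType does only the first-level selection: intersect the
// client's ordered preferences with the mechanism types the data plane
// reports it supports. Anything label-level (interface names, VNIs, ...)
// is deliberately left to the data plane itself.
func selectMechanismType(preferred, supported []string) (string, bool) {
	sup := map[string]bool{}
	for _, s := range supported {
		sup[s] = true
	}
	for _, p := range preferred { // preference order matters
		if sup[p] {
			return p, true
		}
	}
	return "", false
}

func main() {
	preferred := []string{"MEMIF", "KERNEL_INTERFACE"}
	supported := []string{"KERNEL_INTERFACE", "VXLAN"} // no memif here
	m, ok := selectMechanismType(preferred, supported)
	fmt.Println(m, ok) // KERNEL_INTERFACE true
}
```

This keeps the NSM mechanism-agnostic in exactly the "bridge" sense Sergey describes: it routes traffic to a capable data plane without knowing how any mechanism works.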
And so, what I wanted to say: the first two examples are just simply source IP and destination IP. This is something that comes from central knowledge somewhere, I don't know. It doesn't actually come from a central IPAM. But I think what you're really getting at, and what we should probably strive to move forward with here, is some sequence diagrams so people can see these things in context. Because it's good to have the APIs defined, but getting a sequence diagram of how the messages flow, in context with the complete filling out of some of these fields, would be massively helpful to make a lot of this clearer. Makes sense? Okay, cool. And all right, cool. Anything else before we move on to other items in the agenda? Because time keeps on ticking. I would strongly encourage people to get involved in some of these things. Like I said, there's a lot of activity on the IRC channel. We've had a lot of really useful feedback from a bunch of folks. The PRs are out there for comment, and they are being run pretty hot, meaning that as they progress, they're getting updated, precisely so there's a nice place for people to go read through and add comments. So this is an exciting time in the project. We would love to have more people involved. Cool. Awesome. So, action item tracking. Frederick, since you're actually here now, do you mind? You're much better at this than I am. I'm happy to share the project board. Do you want to talk through it? The only problem is I don't have access to my computer right now, so I'm not going to be effective at that. Excuses, excuses. Cool. So I think we probably need to go through and clean some of these up. So for example, if we look at the reviews, we've got the X Factor C in apps, which is definitely something folks would like to work on. But I think things like migrate-errors-to-go-errors, I think that's been resolved. Is that correct, Frederick? Sorry, can you repeat that one more time?
Migrate errors to go-errors? Yeah, that's an ongoing thing, but the majority of that should be resolved. So we have go-errors installed, we have the plugin built out. Part of it is that as we refactor the system, we need to go and change everything to use go-errors, primarily for injecting the stack traces. Yeah, it might be worth mentioning some of why go-errors is nice. Yeah, so basically what happens is that instead of using the standard errors, which just give you a string, you use go-errors and you can inject additional information into the errors as labels. The stack traces are one such thing that we inject. And so, once it hits the logger, we can serialize everything into whatever format we want, but ensure that we keep that structure. So we've set it up so that when you write to, I think it was Logrus, you'll have all that information and context available in your logging system, so you can then filter by it or perform whatever analysis you want. So it's just a little bit like you're saying: rather than getting a potentially cryptic string with a line number attached to it, you get a potentially cryptic string with a line number attached to it and a stack trace. Correct. Awesome, cool. So we've got the ongoing becoming-a-Kubernetes-working-group item, and that's been backburnered a little bit lately. Do you remember, Frederick, what the work-out-documentation-infrastructure item is? I don't at the moment. Okay. And the document on how to get a privileged container? Well, we had a document, well, I remember you wrote the one on how to get the network namespace thing. For how to get a privileged container, I don't recall if that's been documented or not. Yeah, that should be easy to document, though. It's just finding the correct spot in Kubernetes to tell it to give you a privileged container.
And then we've got the software layering and hosting item. I think that as the APIs get really crisp, that's going to fall out to the right place, because one of the things that we've started to do in the arch doc is to get really clear about what is network service mesh in the abstract and what things are particular to Kubernetes. So for example, the NSM-to-NSM API, how do network service managers communicate with each other, that is not at all particular to Kubernetes, right? So Kubernetes shouldn't creep into that. But the network service client to network service manager API within Kubernetes, that we can look at in a much more sane way, because we know that's always going to be a Kubernetes thing. If someone is using a network service manager in a different context, they'll have their own way for network service endpoints and network service clients to communicate with it in that context. Cool. So the enhancement proposal to support the CNCF CNF project, I think we're actually moving towards that, Mikhail. Does it sound like we're heading towards things that would be helpful and useful to you? Yeah, definitely. But again, right now a lot of it is, for me, manual work that I guess could be abstracted into something a little easier to manage. That's the hope. Cool. And then we had a really good point here made by Dunehammer last week about separating this out somewhat by audience, in terms of who we are addressing. And he started taking a stab at sort of what he saw as the audiences, you know: developers of the NSM framework and APIs, developers of plugins, the consumers of those, et cetera. And I think that's a very good point. I think right now we're very much in heads-down mode, but as we document things, we have to keep that in mind. And then the L2 forwarding with VPP example, I think Tom continues to work on that, right Tom? Yeah, that's correct.
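The layering being described can be sketched as two separate interfaces. The names here are illustrative, not the actual NSM API: the point is that the NSM-to-NSM surface must stay free of Kubernetes concepts, while the client-facing surface inside Kubernetes is allowed to know about them.

```go
package main

import "fmt"

// RemotePeer sketches the NSM-to-NSM surface: nothing here may depend on
// Kubernetes, because network service managers can run in non-Kubernetes
// contexts too.
type RemotePeer interface {
	RequestConnection(service string) (string, error)
}

// KubernetesClientAPI sketches the client-to-NSM surface within
// Kubernetes; it may reference pods because it only ever exists there.
type KubernetesClientAPI interface {
	AttachPod(podName, service string) error
}

// stubPeer is a trivial stand-in showing that RemotePeer needs no
// Kubernetes concepts at all.
type stubPeer struct{}

func (stubPeer) RequestConnection(service string) (string, error) {
	return "connection-to-" + service, nil
}

func main() {
	var peer RemotePeer = stubPeer{}
	conn, _ := peer.RequestConnection("secure-intranet")
	fmt.Println(conn) // connection-to-secure-intranet
}
```

Keeping the two interfaces distinct is what stops Kubernetes from "creeping into" the manager-to-manager protocol.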
And I'm trying to participate in this discussion of the data-plane-to-NSM, NSMD-to-NSM protocol, because, you know, that's the key to developing these data plane connections. Awesome. So I think we've got some in-progress things related to, sorry, OV. The guidelines for extending NSM from Dunehammer, I need to go find out why that's hanging out in review. The images have gone stale. Is there anything else that folks want to touch on while we're reviewing the Kanban board? Cool. Anything else that folks want to talk about? We're sort of running off the agenda. We still have a little bit of time here at the end, but I'm inclined to yield it back if folks are good. All right. Thank you, guys. Much appreciated. Talk to you next week. Yeah. Bye, everybody. Take care.