Oh, man, we're already starting to talk about the next KubeCon. I'll have to give Chris a heck of a hard time. Too soon, man. Too soon. Well, at least the CFP is open until sometime after the current KubeCon. But yeah, I think we definitely want to get some talk proposals in there as well, and encourage some from the broader community. Nice. Great. So if anyone is new, please hop onto the Network Service Mesh meeting notes — there should be a link in the chat — and add yourself to the attendees list. So I think we should go ahead and get started. I don't know how long today is going to be. It'll probably be a relatively short agenda compared to normal, considering we're in this weird time between KubeCon and the holiday breaks. I'd like to spend most of the time talking about things that happened at KubeCon, and then we'll continue on with the main agenda. So what's the one after the first? January 8th. Yeah, we have gone on record — and we discussed this, I think, last meeting — we are on record as January 8th being the first meeting after the holidays, so that we don't drag people here on the first. Okay, so let's go ahead and get started. Agenda bashing. Is there anything anyone would like to discuss that is not part of the meeting notes already? Yeah. Is there something that you want, or a particular topic? No, I mean, I have this topic that I've put in as an issue already, about splitting the repo, but I guess that this is too deep a dive, and right before the holiday break is probably not the time. We can do whatever you're comfortable doing, absolutely. And I think there are aspects of it that are sort of deep dive, in terms of things like circular dependencies between repos. But it probably doesn't hurt to seed the idea, even though it may be a little premature to actually make a decision on it. Yeah. Yeah, we can get started on that. I agree with the sentiment that an actual decision we'll want to hold off on. Yeah, of course, of course, of course.
So I think it's important, though. I mean, if you can't make a network service that's not part of the repository — and until we split, we won't find out — then that defeats almost the whole point of the project. So we have to find out whether this works. To be fair, we all know that no matter how well intentioned you are, until you actually go create a thing in another repo, you're never really sure. As a matter of fact, I actually tried doing this. That's why I started working on the SDK. Yeah, because I have my project with videos and things, and I tried to import it, and it was almost impossible. So I started working on this SDK with this exact idea, so that you can decouple the examples, and you should be able to do NSCs and NSEs completely independent of the repo. Yeah, those are excellent points. So let's add it to the agenda. We can talk about some of the initial reasoning behind it. Okay, good. I think we're well into the "people generally agree it's a good idea" stage. Now we're quibbling about what it takes to get there. Yeah. Okay, so, events. Does someone have a readout from KubeCon? I think a lot of us can make comments — we were all there. So we had, I think, something like 11 different things going on between various talks and demos and booths and whatnot. From my perspective, it went super, super well. I think we had something like 200 or 300 people in the intro talk, and maybe about 50 fewer than that in the deep dive. Lots of good questions. And then in both cases we had a mob of maybe 20 folks or so for some time afterwards in the hallway, who were asking questions and wanting to talk about it. Yeah, there was somebody in the audience vocally calling for documentation. Yeah, but we ignore that guy. So it's not a big deal. I think he had a totally valid point, to be fair. Yeah, but we just ignore him anyway. Yes. I had noticed that, yes. So among other things, we ended up doing that.
So at the intro we ended up doing a demo that Frederick cobbled together, where live on stage, in six minutes, he wrote a working CNF and made it part of a network service mesh, which is kind of awesome. And — thank you, Matthew and David and all the folks on the Skydive team, for all your help on this — I ended up demoing quite a few times using the Skydive integration, showing folks essentially the Sarah story, where we deploy the chain of client to firewall to VPN gateway as the secure internet connectivity service. And that was also super well received. Other folks want to share impressions? I can say that this definitely made a lot of noise within VMware. I had a lot of people contacting me, and a lot of internal discussions going on. Also, the guy from Charter Communications came to me talking about how NSM could eventually benefit his use case. So those would be, let's say, the main takeaways for me. And of course, thank you all for the announcement. This was really great, completely unexpected. So thank you for announcing my promotion to a committer. Yeah, this was something we kind of surprised Nikolai with at the intro talk: we got to the talk and announced his promotion to NSM committer. It's further down here in the agenda, so congratulations again, Nikolai. And from what you said, it was well timed in terms of who was in the audience and so forth. Exactly. Yeah, so I was quite happy with having us make the announcement. You've done a fantastic job. So again, personally, thank you as well — you've done some amazing work. Thank you. So yeah, I've had a lot of people come up and ask me, and I had a bit of a weird experience as well, because Heather gave her talk — Heather is the person from Linux Foundation Networking — and she decided to open it up and allow people to basically ask questions.
And apparently several people in that talk were very telco oriented, and there were a lot of questions popping up about network service mesh and potential integrations and so on. So the fact that people were actively asking about it in these venues, I think, is incredibly promising. I think we've hit a really good spot. Yeah, one more thing, because you mentioned telco: at least up until kind of before KubeCon, my impression was that the main use case for NSM would be telco. And apparently, I mean, a lot of people from the telco business are interested in it. But from some of the discussions there, it actually appeared that a lot of enterprise workloads could also benefit. And yeah, we had this conversation with the SIG Network guy, actually discussing, you know, running Istio on top, and different use cases there. So I think that this is, at least for me, a key takeaway: we shouldn't focus only on telco, and we should try to gain interest from other people. Yeah, I completely, completely agree. Yeah, well, you have to bear in mind the demonstration played strongly towards non-telco use cases, because that's not how a telco would put it to use. So I think we kind of opened people's minds about how this might be more generally useful. Also, KubeCon's not so much a telco event like you might find at the OpenStack Summit. So the audience that we had might well have had a lot of telco people in it, but it certainly wasn't as exclusively telco as it might have been. Yeah, but I think one of the things we have to keep firmly in mind is that we have really big markets, both in enterprise and in service provider. And one of the better services that we can do for the SP industry is to have a single solution that is popular both for enterprise and SP, because I think SP has historically been very, very hurt by the fact that you had a bifurcated stack.
You had "this is the way you normally do networking" and "this is the way you do networking if you're trying to do an SP thing." And that's, I think, gone poorly. I mean, we do have some folks here from SP who can comment if they'd like to, but I think that's just traditionally gone poorly. And so I actually think it's crucial to get one good solution that meets both sets of needs. Yeah, and I also suspect that as enterprise continues to mature, many problems that you see in traditional telcos or service providers, enterprises will start to hit as well. So even the VPN example in the Sarah story is a really great illustration: when you start to deal with very large enterprises, then things like — not only how do you connect to Sarah's system, but how do you even set up that VPN in the first place, connect to that remote system, and set up the underlay configuration and so on — these all become important on the enterprise side once you've hit a certain scale. And so I think we have a unique opportunity to be a bridge, effectively, or a unified solution that allows people to ultimately treat both of them the same. Yeah, I think that's definitely part of the goal. The other comment I would make about the Sarah story example: the reason I typically lead with that example is because the networking people in the room immediately see the implications of it, and they'll project it to more complicated cases. But that would not be true in the opposite direction — if I led with a more SP-focused story, it would be far less clear to the enterprise people why they'd care. So we lead with an enterprise-y example. Oh yes, I understand. And I think we're going to be in this interesting place of, you know, both maturing and expanding our capabilities at the same time. Also, just for a next demo, I came across a really nifty tool.
I'm not sure if people are familiar with Jupyter, but effectively — people often use it with Python — you can basically type markdown interspersed with code, and run the code, all as part of a web document. It turns out there's a Golang integration that someone has done. So one option is we could do a Jupyter-style document that describes what it is we're doing, runs a snippet of code, and effectively builds up a network service, documenting as we go what's going on. That makes it much more readable, and you can then commit the notebooks to Git — GitHub will render the final results properly as well. Awesome. So it might be a little interesting thing we can do to help with documentation and demos, and give people a scratch space where they can experiment. Sounds awesome. Let's see — anything else from KubeCon anyone wants to talk about? Anyone else who was there? I know we've got so many people on the call who were there, so I'm very curious about their perspectives, including what didn't go well, so we know what to fix next time. Okay. Well, if anything comes up, definitely bring it up. Okay, so — sorry for that, I was trying to find my mute button. I think one thing Ian and I chatted over a beer, on a napkin, is how to make this work in public cloud. Well, there are many potential ways to make this work in public cloud. What exactly do you have in mind? Not quite sure. What we use as the data path, I think, is an issue. While you can use VPP, it gets really complex trying to deploy that into public cloud. So it may not be that easy, but I think it's worth looking at. I don't know if Ian kept that sketch. I didn't keep the sketch, because it was in Heather's notebook, I think. So technically speaking, it's still around. But I would say a handful of things. Firstly, I have, with a certain degree of beating my head against the brick wall, run VPP in AWS and got it to work.
Running it in AWS is easy; running it and getting it to actually eat an interface is annoying. Running it at all certainly involves giving the VM some startup tweaks, but it can be done. I think one of the things here — I was looking at the code the other day; I have no time to code, but I was looking at the code the other day and weeping over Frederick's use of makefiles to write scripts. But that aside, if you look at what he's done with Terraform, he's written some Terraform for one back-end provider. But potentially we could use that Terraform to work against other back-end providers. And if we basically ported it to AWS, that might be a way of making sure that if people want to use AWS rather than anything else, we could make that a development environment, which might be a little easier for people to consume than Packet, and certainly a bit faster than Packet. And then we're building in the idea that it works in AWS from the start, rather than it being an afterthought. In terms of how you would deploy it: you've heard my comments — well, Ed has heard my comments — about how data planes should probably be normal network services at some point. But if you wanted to use an interface type that was basically a tap interface to a bridge, to connect data planes and whatever to the outside world, we could probably come up with a deployment model for that, so that we've got some external connectivity from the VM to the outside world that we could actually build on. And again, Terraform could probably help us along with that, because you can assign multiple IP addresses to AWS machines. And theoretically — it's very much a theory at this point — that means you can run VPP without letting it grab an interface at all, which means we could have a data plane running. Not how you'd run it in production, but it's how you can test it.
We do actually have issues open for this, and I would love to see people work on getting this working in public cloud environments. So if there are people interested in doing that, we would love to see expansion in that direction. I know the machinery was intentionally written with the flexibility to let people pick other kinds of environments that they drop things into. We default to Vagrant, but it supports other options, and it's easy to add other options for the most part. Yeah. The reason I was harping on this is that Vagrant is a pain in the ass for me, because Vagrant likes VirtualBox and doesn't like a lot of other things, and I don't run VirtualBox. So I was sitting there beating on it to try and get it to run against libvirt, and then it was like, yeah, I can't do that with Vagrant, and so on and so forth. But again, the Terraform framework is closer to the mark for that. Terraform has a bunch of back-end providers, and AWS is certainly one of them. So that would be a nice option, and maybe not a particularly difficult one to work with. One thing I do want to point out is that I know it works for VMware, because I use that all the time. And we have a patch out that Matthew pushed for getting it to work with libvirt — I think we're literally just going back and forth about whether or not to lock it to SSHFS. So we have people who have done that work on the Vagrant stuff, but I'm well familiar with your frustrations. Yeah, and one of my points is that Vagrant is a nice development tool, but it's only ever a development tool, whereas Terraform is a production tool. So I'm not saying necessarily we go one way or the other, but I am saying that if we can find a way to make the Terraform code a little bit more general purpose — and it's built very close to the mark already — then we should be able to make Terraform run against a bunch of back-end providers. Yeah, I'm quite fine with that.
Although, quite frankly, I'd also love to explore simply running against the Kubernetes provided in many of these public clouds, not necessarily even Terraforming VMs, although that's certainly also an option. Yeah, we can try both of these things. I mean, people run both ways in production, so it's not so much that we have to choose one way; we can experiment a little bit and see what works for us. I don't know quite how EKS and friends work in terms of giving privilege away. And of course, they'll have very opinionated virtual machines with no huge pages, so there's a limit to how useful that might be. Yeah, as long as we're not grabbing physical interfaces, from the "can I show it working at all" perspective, huge pages aren't really required. Huge pages are actually kind of a pain in the ass, particularly if you're trying to use them with a DPDK interface. Yeah, and again, my expectation here — and I tried this once a while back and didn't have a great deal of joy, so I'd have to go and revisit it — is that you can use a TUN/TAP interface as your DPDK interface, and that's probably the easiest way we could get this even loosely working within AWS; otherwise you end up using some very expensive virtual machine types in order to get VPP to run. So it's on my job list for other reasons anyway. This is actually what we do now. The thing we do now — which is great from a "make sure it works all the time" perspective, although not great from a "make sure it performs optimally all the time" perspective — is that the data plane literally is just running VXLAN against the pod interface with AF_PACKET. Yeah, and again, that works. There are reasons why I never got that to work in Amazon, because if you're using Amazon's standard interface, then even putting it in a bridge domain tends to get the virtual machine upset. So we'd have to experiment a little bit. Cool. Anyway, let's see, I think we were in the middle of events. Yeah.
So, events-wise, we have a lot to talk about on the lab provisioning stuff, so we'll come back to that later on. We have FOSDEM in Brussels coming up, February 2nd through 3rd, and Nikolai has submitted a talk already. Nikolai, have you received the decision on whether it was accepted or not? No, not yet. I guess that's probably a no. I mean, if it's not accepted by this time — I don't know when the deadline was — I think we can safely scratch that. Cool. So we'll leave this up for the moment. Okay. We also have a call for papers for KubeCon EU. So please think about what type of things you would like to talk about. They don't need to be network service mesh oriented: if you have a problem you're solving where network service mesh can help, those are super useful, because showing where people are doing something interesting with it is a powerful way to show that it's getting traction. We also have Mobile World Congress coming up. So if you have avenues to show off any demos or anything similar, definitely get in touch with me and Ed, and we'll work to make sure that you can put on something compelling. Are there any other events that anyone can think of that we should add to the agenda and pay attention to? There's got to be an ONS at some point, but I think it's April — it's probably the next thing out beyond what you've already got. Yeah, ONS is going to be in San Jose, California. That's very convenient. You always look forward to all these exciting places to go, and they announce San Jose, and it's like, oh man. Well, it means you can go out drinking and get an Uber home, so there's always that. It's actually easier if I'm in the hotel, because I can just go up to the hotel room. I just wanted to ask about KubeCon EU: do you think that we should try to create some kind of overview?
Like, by May we should have something like — I don't know — what's the status of the project, where we are? I mean, we should have something. Yeah, I think probably we should get some talks like that in. It would also be good to get talks in from folks who are looking to use network service mesh in production — and there are users on the call like that now. I think that would also be interesting. It would be good to get a breadth of proposals in from folks. As much fun as Frederick and I have doing our song and dance — and we will probably do it again at KubeCon EU — it would be good to get a broader set of folks talking about it as well. We also need to bring it down to the concrete. If we can get things working by May that are less hypothetical and more useful, then that obviously is a big step in the right direction. Yeah, I will argue my six-minute CNF is useful. I will argue it will be useful when you document it. Yeah, I think you are correct there as well. Having something more concrete that we can show off — maybe part of it is that we show off the six-minute CNF, but using the SDK, and show off that we now have an SDK that people can use, and that it is easy to import and get a CNF running. One of the things that your demo shows, but does not mention in very many words, is that you are taking a piece of code and modifying it to be a different piece of code. In some sense it is not writing it from scratch, but basically taking something that already exists and repurposing it. Now, you are not going to write something from scratch, even with the most elegant SDK, in six minutes; but on the other hand, that would be one thing — to give someone a feel that if they basically started a new repository and wanted to write themselves a network service, they could do so in fairly short order.
But more useful, I think — because you have to remember the audiences we appeal to are not all the same — is this: it is all well and good to appeal to programmers and say, look, I can do this incredibly quickly, and that has got uses for their managers as well. But what is most useful for the people with checkbooks is not watching it being written, but seeing somebody spin something up and do something useful. Again, the use case is more important than the coding side of things, because then people get paid to work on this; otherwise they just want to work on this, and nobody will give them the time. One thing I do want to point out that I think would also be a big wow factor at KubeCon EU: showing this deployed across multiple public cloud providers and working, because I think that really makes it super, super real. And in fact, what we may want to do is tell a little bit of an NSM story: if we could do a demo where we deployed network service mesh to the various public cloud providers' Kubernetes, and then showed the multi-cloud story of being able to use NSM to consume a network service in one cluster from another, I think that ends up being super compelling. Even if we can't use the NSM element of things — even if we could at least deal with individually created GRE tunnels, with a GRE service of some variety — it would be better than nothing. The point is you're demonstrating something concrete which we can't do today, which is low-level networking out of a Kubernetes cluster, which has never been a possibility. Yeah, agreed. So lots of good ideas there, it sounds like, and it's not going to be totally easy to pull that all together and chart it out. But it's one of those things: January 18th feels far out right now because it's a month away, but when we're back in January it will be right here in our faces. It feels too close to me. So folks should think about what they want to do, and let's try and get a broad array of talk proposals in. I think that would be good.
No — I'll repeat my previous sentiment: too soon. Okay, so, moving on to the main agenda. We've already announced it, but in case you missed it, Nikolai is now a committer on Network Service Mesh. We'll have to make sure that you have access to everything that goes with that title. So if you're missing anything, let me know and I'll do my best to get you whatever you need. It works, from what I can tell — I tried both repos. And do you have a Packet account as well? Have we set you up with that? No. Okay, I'll make sure you get added to that. Excuse me, I just wanted to discuss a bit about KubeCon: I wanted to know if there was some talk about moving from VNFs to CNFs, because this is an important topic for the telco environment. Ah, there was a talk given by Dan Kohn, who spoke about VNF to CNF — which reminds me, I need to have a conversation with him, because he posted, effectively, Multus versus Network Service Mesh, and I need to have a conversation with him on that. There's been some general discussion about it, but is there anything concrete about moving from legacy VNFs to CNFs? Nothing concrete yet. What would you want to see? Legacy firewalling, for instance. I know that we have some vendors who are thinking about moving to the CNF world, but they are saying that their VNF is small enough to run as-is and doesn't need to be moved to the container world, things like that. I don't know if there are some concrete workloads that we can use to demonstrate the move to the CNF world. They actually did that — this is something that Dan talked about, and also some of the folks from the VPP team, who are also participants here, actually did that work, did a demonstration, and did performance measurements, and they showed it can be done. Generally speaking, when you look at moving a VNF to a CNF, there are two big impediments that you run into.
The first one is that the VNF's data plane has to be a pure user-space data plane, and most people who have written VNFs have hacked the ever-loving hell out of the kernel — so that's the first impediment. The way the VPP guys got around this was that they used the VPP data plane both for the VNFs and the CNFs, which made the lift and shift basically trivial. The second thing you have to solve is what I sometimes call the wiring problem, which is: how do you chain together and compose CNFs in Kubernetes? Kubernetes networking typically gives you exactly one interface, and that's the major problem that network service mesh is currently trying to solve. So there was definitely stuff done there. I think there's also going to be a splash at Mobile World Congress on VNF-to-CNF migration, but the net-net is I expect the number one problem most vendors are going to have is that if they built a data plane by hacking up a kernel, they're going to have to take something like VPP off the shelf, or build something like VPP as a pure user-space data plane, in order to make the leap. Well, there are two things from that. One is that there aren't very many VNFs out there with a hacked-up data plane at this point. They're nearly all running DPDK of one variety or another, because you simply wouldn't be able to sell something otherwise at the level of performance the existing VNF use cases demand. But you say those are the major problems — I think you're focusing right at the bottom of the stack. There's another major problem that nobody's explained, which is that ETSI doesn't apply to Kubernetes at all, so no one knows how orchestration is going to work.
No, that's also a problem. But the other problem, the one you went into with DPDK, is that DPDK tends to make a bunch of presumptions about the infrastructure that are super, super hard to actually satisfy in a cloud-native environment. Yeah, I agree — and we had this conversation yesterday between ourselves — but that's the thing: we can fix a part of the problem, but NSM doesn't solve the whole problem. We've got to work out what we have to do to the tools people actually use, and that means DPDK, practically speaking, so that if I wrote something with DPDK, I don't have to give the container running it the keys to the kingdom. Because if you're trying to run multiple VNFs — and that's the plan here, otherwise you wouldn't be doing this — within a single host, and every single one of them running DPDK needs a fully privileged container, or even a partially privileged container, then this isn't going to fly in production. You wouldn't trust anything, and your vendors wouldn't support it. So that's another element to this: we need to accept that we have to get the DPDK people — or help the DPDK people — change the way they think. What I got from Marian is that the memif PMD is going to be available soon in DPDK; I mean, we should be able to run just by setting it up. If that were the only thing, that would be super wonderful, but there's a long list of things where DPDK makes presumptions about the perfection of the world, or your complete ownership or mutability of the environment, that tend to be false and unachievable in Kubernetes. VPP is a lot more forgiving about many of these things when run without DPDK, although if you use the DPDK plugin to access a physical NIC, you're stuck with all those presumptions. There are active conversations going on in the DPDK community about how to sort some of these things out. But I guess the fundamental thing — and this always gets super tricky — is that when your underlying presumption for a long time has
been that the infrastructure can be molded to your will, and you get deployed to cloud native, where the presumption is that the infrastructure is more or less immutable — or, at the very best, there's a small number of knobs you can turn — then it's going to be a culture shock. So how about if we put together — we don't need to run Kubernetes for this — a working environment that would give them some idea of the problem? If we basically give them enough clues to run a Docker container with the kind of interfaces we're looking for and no privilege whatsoever, then they can go and test this without our assistance. That's actually false, and I can tell you that from the experience of the VPP guys in the CNF effort, because they went through and did that exercise in Docker, and when they went to Kubernetes, they discovered there were still a shit-ton of presumptions they had been making — ones you could effectively get away with in a Docker environment — that you could not make in a Kubernetes environment, about the mutability of the infrastructure. My point about Docker environments is not what Docker does by default; with the right options, it can basically run a container the way Kubernetes would run a container, and we know this to be true because Kubernetes runs on Docker. So it's not a question of whether this is how Docker works by default — absolutely it isn't how Docker works by default, and that's totally fine. It's a question of understanding — and we're in a better place to do that, I think — how to make a simulated environment that's roughly how DPDK will see the world from an NSM perspective. Can I interject a minute? The easiest way to actually get that right is to do it in Kubernetes. That is — a hundred percent, yes it is, but that will take longer. Actually, it won't, and here's why: I guarantee it will take longer to tweak out, and stay tweaked out, a Docker environment that has the particular set of constraints on it that Kubernetes
imposes — keeping up with those constraints is much more work and will take much more time. It's not a question of keeping up with them; it's a question of what we could do in the second week of January versus how long it will take us to have that working in real life. Again, I can tell you, with the voice of experience, having worked closely with the VPP guys, that the amount of work and the amount of time is precisely flipped. Could I ask a question — has anybody looked at AF_XDP? Because I've seen some Intel papers where they're getting almost the same performance as DPDK without any of the baggage. Yeah, I'm super excited about AF_XDP, because it does look like it has the potential to crisply solve this problem. And so the two steps there — and I'm actually talking to people about this — would be: first, how are we doing with this landing in kernels and production systems? Right now, as I understand it, it's alpha on CoreOS; I don't know what the story is with Ubuntu, and I don't know what I'm allowed to say about the story with Red Hat. And then the other thing is getting VPP support for AF_XDP, and I'm talking to the FD.io community about that as well, because that would super-simplify a ton of stuff. That might raise the same question of privilege again, because you presumably need some power to actually use those interfaces in that way. Yes, you do. Yeah — sorry, I was just going to say, we don't need VPP to be running here; we just need the PMD to be running. Whatever the testpmd equivalent is in DPDK is enough to prove this is going to fly. I'm pretty sure, by the way, that we can centralize privilege in this regard, because what AF_XDP is really doing for us is mapping some set of packets from a physical interface into user space, and so I suspect we could pull a similar file descriptor passing game to
We could simply have an unprivileged thing say, "here's my chunk of memory," and have a pretty simple privileged thing on the system say, "great, I'm going to ask for this set of stuff to be mapped to and from your memory." I suspect we can do something like that, and that would, I think, solve the privilege concern.

That may well be true, but we don't know that we can do it until we've seen it working.

Okay, cool. All right, let's get back onto the main agenda. We have an issue on managing issues and PRs. Nikolay, I'll let you state the problem that you're thinking of.

Yeah. So I definitely believe it would be great if we could have some dedicated call where we just go over the issue list, and probably some pending long-standing PRs if such exist, but most importantly the issues, and try to figure out where we want to move them, maybe schedule some resources, or decide if this one is more important than the other and point people at it, and along those lines. I think that currently it's a bit random-ish, if I may use that word.

It's an interesting idea. My only constraint would be that we want to make sure we do this as a public call, so that we remain open. But it's an interesting notion. I guess the other question would be: is there a reason we can't do that in this meeting? I'm absolutely in favor of getting this done, and it's possible we need a different meeting for it, but generally speaking I try to keep the number of meetings smaller. If we need a second call, we need a second call; that may be true, but I'd rather try to do it here first. If it works here, that's great, and if it doesn't, then we can look at a second call. Does that make sense to folks?

To me it sounds like this call is already overloaded. You see what kind of discussion we just get, a lot of kinds of very different things.
Yeah, that may be true. Like I said, I'm open to the possibility, so I guess take my position as being generally supportive of doing bug scrubs; then we can work out what the best mechanism is and go for that.

Sorry, go on.

I was just going to say that the main thing we're going to want to do there is this: nobody will be able to see the wood for the trees if we don't do something about highlighting the top ten issues or so, because the number of issues will increase and increase. I guarantee that, for the time being, we'll be creating more than we get rid of. So the trick is going to be saying: this is the top of the pile, and the rest we'll pretend not to notice for the time being. And that sounds like a committee responsibility to me.

Yeah, and there's another part to that as well, which is trying to onboard newcomers. I'll give you an example from my early time over at the Docker project. We would get new people in, and they would look at something that seemed trivial to do, but there might be mitigating factors that make it very difficult, or vice versa: something might look very difficult and turn out not to be. Having someone, or a group of people, who can help groom these types of bugs, and get them to a point where, even without implementing them, 70 or 80 percent of the design work on how to get there is done, allows people to just pick one up and run with it. Then they need less hand-holding over the long term, and they become much more effective committers. So for me this is not just about prioritizing; it's also about making it easy to onboard people who want to actually contribute to the codebase.

Yeah. So let's go ahead and bring this up again next week. We can try doing some grooming beforehand and then bring it up in the main call.
I think there's some work we have to do before we actually land this effectively in the call. So, do you want to work with me on that, so we can work out what type of things we want to talk about?

Me? Yeah? Yes, yes. Cool.

Well, anyone who's on here could basically find their three favorite issues, be they important or low-hanging fruit, and improve the quality of them. Then the next call will be easier, because there'll be some information to work from. Otherwise, the thing I've seen elsewhere is people reading the bug list, the issues list, for the first time in the call, which involves a lot of pausing and thinking and not a lot of time getting things done.

Okay, yeah, that's a good idea as well. And feedback is always welcome on how to write better issues; I tend to put a fair bit of effort into writing issues, and I think it's helpful to hear when that's not working.

I think a large part of it at this point is just going to be taking some of the issues... you've done a fantastic job, you've seriously upped your game on writing issues, which I think has helped. We need to do the same on our side as well.

Let's see, the next topic is the KubeCon CNF comparison, but I don't think we have either Michael or Watson on the call, do we? I'm guessing we don't; I don't see them here. And I don't think we have Taylor either. So: my understanding is that they have a CNCF comparison that's going to be done for KubeCon EU, so we need to make sure we jump straight back into that and help them with the next stage of their comparisons, so that we get good numbers.

Next: splitting the examples from the main repo. I think we had some discussion on that earlier. Do we want to talk a little bit more about this, or do you think we've spoken enough on this, Nikolay?
Let me just spend a few sentences on it and try to explain my idea. I tried to write this up in the issue, but it was probably not really clear. My idea is that I would prefer to see NSM core, that is nsmd and everything from the control plane, and maybe the data plane as well, being a separate repo that you can develop and test without depending on the examples, or Vagrant, or Terraform, or whatever is needed there, if you see what I mean. This of course presumes that we have a good testing infrastructure, like unit testing, which is not in place today, I understand; but maybe if we outline the problem we can start working on it. And the idea is that this repo should be just the source for producing the Docker images and nothing more than that. So if you want to implement an NSC, then in your environment you just say these are the network service images that you want, with this or that version or the latest one; you import a package from the repo if that's needed, the SDK for example, and you start developing without having to build your nsmd locally, et cetera. Of course this is far in the future; I'm not saying this is something we can do today, or in a month. But if we set it as a kind of common understanding and a target, and start working towards it, maybe we can end up there eventually, by, I don't know, the next KubeCon. Does this make sense, or is it still a little bit fuzzy?

Yeah, the idea makes sense. I think we have to think it through: there are some significant benefits, but we also have to work out what the cost is going to be, because once we split it up into multiple repos, how do we integrate all the repos in the CI, for example? We have to make sure that we can trigger a build in all downstream projects simultaneously whenever a build occurs.
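The consumer-side picture described above, pulling published images and importing only an SDK module without ever building nsmd locally, would look roughly like the following go.mod for an out-of-tree NSC. The module path and version are hypothetical, purely for illustration; no such split-out module existed at the time of this discussion.

```
module example.com/my-nsc

go 1.12

// Hypothetical: depend only on a split-out SDK module,
// not on the whole main tree with its examples and deploy tooling.
require github.com/networkservicemesh/sdk v0.1.0
```

The point of the sketch is the shape of the dependency graph: the NSC pulls one small module, and everything else arrives as prebuilt images.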
So we'll have to work out some of the paths along that as well. And I do think we're eventually going to have to split off some of the stuff: perhaps the data plane should eventually live in its own repo, with VPP, as an example, or perhaps the SDK should be a standalone project. So I think we have some things to think about regarding which side would be easier to start with, and we can work out the implications for the CI over time as we go. For me, the biggest things are going to be CI and test, and the amount of cognitive load on the developers; if we can find a way to constrain those, then I think we can get a lot of benefit out of it.

Cool. Well, let's go ahead and explore that a little. Our next meeting is on the 8th, so let's bring it to the main group at that time as a main topic.

Okay, will do.

Okay, and then we'll just skip the next one... ah, sorry: design docs, how-to docs, and code comments.

Right, let's just start from the beginning. Much as Ed would like to tell you different, because he loves writing slides, a document here is a document when it's committed, because then if the document differs from the code, we have a bug, and we can file a bug and we can fix a bug. That's what I'm talking about there: specifically, things in the repository that tell us how to do things, which newcomers will ultimately prove out for us as being true. And I'm speaking as someone, again, who was trying to use the Vagrant stuff; I filed a bug on that the other day, so it does actually work in practice. The other thing here is that this doesn't have to be a "let's down tools and write all the documents before we move on."
It's more a question of, for the time being, anyone who's doing a pull request needs to look at the code and ask: does this code explain itself? Will it make sense to a newcomer? Will it make sense to me in three months? Is it current with the documentation, or does the documentation need to change accordingly? So for the time being the responsibility is: don't accept code, even from Ed, if you find that you can't make sense of it, or that it's not actually documented somewhere. If you do that, the documentation won't necessarily appear overnight, but it will continually get better, the same way the code does. So those are really the requests for the time being.

The other conversation I was having with Frederick at KubeCon is a slightly more abstract one. If we have grand schemes in mind for how the world will work, which don't work today but are what we're aiming for, then again: write documentation to explain that, and put it in the repository, even if it's not true yet, even if it's just the way you want the world to be. If it goes in the repository and it's committed, then we can at least say: this is what we all accept to be the truth; this is the aim we're all working towards. So those are the three things I would suggest.

If I may add: since I now have the rights to approve commits, I would like to see us all start having a more, let's say, disciplined merge process, because today it's a little bit like "I found this works, so I put it in my PR," no matter that it's not really related. So I'd like more structured pull requests, which focus on a single thing and have a decent explanation of what they're really about.

Yeah. Well, here I'm speaking from a previous job, which involved, generally speaking, not killing people, which mostly I succeeded at.
So, you have to remember when you're reviewing code that there are reasons why you would accept it and reasons why you would not. And, to be fair, it's a learning experience here on reviewing, on what's acceptable and what isn't, but you might want to think in that regard about what's good practice, and what would be an absolute requirement without which you won't take the code on. Because otherwise there's no threshold; people just accept code because someone worked really hard on it, regardless of the fact that it makes everybody else's life harder in the future. Documentation is one thing; it's not the only thing, but it's one thing you should be thinking about. And, as you say, separate things out so that everybody can see that one change does one thing and is obviously doing the right thing. Otherwise, with two changes mixed up, one of them gets lost in the other; you don't know whether it's complete, and you can't really make it out.

Yeah. I think we'll find a good balance, though, because it's a balance between how much red tape we want to put on and, at the same time, making sure we keep the complexity under control. So I think we're thinking about this in the right direction. And part of it gets down to another core thing we have to focus on: we've done a lot of work to get the demos up and running and to show off the ideas. Think of it like a proof of concept: we've proved that we're able to take it in this direction. Now we have to start focusing on the quality side, working out bugs, documentation, and design issues, and really solidifying it. What we've shown will excite people, but the only way we're going to get into production is as a high-quality project that solves a real need.
And emphasis on the high quality, because people will wait for something else to come along to solve the problem if we're not up to the quality bar we need to be at.

Yeah. I would also focus on the fact that the more people join in, the faster this will get done, which means that if we make it hard for people to join in, we lose that benefit. But I would look at this the same way as putting linting or code checks into any new open source project: people curse it for weeks and weeks and weeks, and then magically it becomes second nature, and they just don't write code that fails the linting checks. The same thing goes here. They will curse you for weeks because you're insisting that they document things properly, and then they will just do it automatically. If you don't start down the path of setting that as a standard, then it will never be a standard.

Cool. Well, with that, we're already a couple of minutes over, so I'm going to have to cut the conversation short. Let's make sure this ends up on the agenda on the 8th so that we can talk about it in more detail, and work out exactly what bars we want to set and the overall direction. With that, are there any last-minute things or announcements that anyone has, or are we good to go?

All right, we will see you on January 8th at the same time. Thank you everyone for joining in, and see you then. Happy holidays, all. Big release! Bye. Happy holidays. Bye.