Okay, really quick, so let's go ahead and get going. Folks, keep adding yourselves to the meeting minutes as participants; apologies for being a little flaky this morning on the agenda. Let me go ahead and share the agenda so we're all looking at one unified thing. First up, as always, is agenda bashing, a quick review of where we're going. We'll review action items; we've got some that carried forward from last week. Then review of development activity with Frederick and Kyle, and review of use case mapping. John, I think you're leading that this week because Prem is actively flying right now. Sorry. Yep, probably not much, but we'll do it quickly. Cool. Then we've still got the open item about meeting time planning, then action plans for the coming week, and with any remaining time there's conceptual review stuff we can do. So, anyone else want to add something to the agenda that's not already there? I'm going to volunteer Kyle to talk about CRDs, since there was a lot of work done on that. Okay, that's under code activity. Yep. Yeah, there was a ton of stuff happening, a lot of it very cool. So anything else folks would like to add? Awesome, cool. Let's go ahead and dive right in then.

So, from the action items we had from last week on code activity: Frederick, have you thought any more about the in-cluster auth stuff? Yeah, so there were two things I ended up looking at. The first one: I ran a few tests in order to work out the in-cluster auth. Since we're writing in Go, Kubernetes provides a very nice way to grab a configuration — you can call the in-cluster config method in the Go client, and that configuration has everything you need in order to access the API. The second part of that is that the default service account it gives us has almost no privileges; you can query the version and that's about it. So what I did was create a new service account and give it access to a limited set of APIs that I enumerated, and that worked out well. I think what we'll need to do is create at least one, maybe two service accounts, depending on the roles we want, which will monitor the new pods being created, new nodes being created, and so on. We have to work out whether we want to go beyond that — for example, do we need the ability to create new pods, or to add containers, and so on? If we do, we can add those privileges to the network service manager pods. So that was pretty easy.

The second thing we can do is add capabilities through the pod spec. For example, one capability that we will almost certainly need for certain SDNs is the NET_ADMIN capability. All of these capabilities are dropped by the container runtime by default, and we can opt to not drop things like CAP_NET_ADMIN, which gives us the ability to manipulate the network interfaces and so on. Of course, we'll still recommend that users test and make sure things work, because there are things besides capabilities that may block a user. For example, if SELinux is active, it's possible it can be configured to deny the operation despite the fact that you have CAP_NET_ADMIN. Same thing on the Ubuntu side; they have their own SELinux equivalent (AppArmor) in terms of functionality that can block these types of requests.
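For reference, a minimal sketch of the two pieces Frederick describes — grabbing the in-cluster config from inside the pod, and a pod spec that binds a dedicated service account and keeps CAP_NET_ADMIN — assuming client-go. All names, the service account, and the image are placeholders, not the actual NSM manifests, and the call signatures follow recent client-go (older versions take no context argument):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// listPods runs inside the pod: InClusterConfig picks up the service account
// token and API server address that Kubernetes mounts into the container.
func listPods() error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Only works if the bound service account's Role allows listing pods;
	// the default service account would get a 403 here.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("saw %d pods\n", len(pods.Items))
	return nil
}

// managerPod is roughly the pod spec side: bind the pod to the limited
// service account and retain CAP_NET_ADMIN instead of letting the runtime drop it.
var managerPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "network-service-manager"},
	Spec: corev1.PodSpec{
		ServiceAccountName: "nsm-manager", // hypothetical account with a narrow Role/RoleBinding
		Containers: []corev1.Container{{
			Name:  "manager",
			Image: "example/network-service-manager:latest", // placeholder image
			SecurityContext: &corev1.SecurityContext{
				Capabilities: &corev1.Capabilities{
					Add: []corev1.Capability{"NET_ADMIN"},
				},
			},
		}},
	},
}

func main() {
	if err := listPods(); err != nil {
		fmt.Println("list failed (expected outside a cluster):", err)
	}
	fmt.Println("example pod spec:", managerPod.Name)
}
```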
And actually, if a user wants to go all out, they can fine-tune the policy so it only allows the specific types of requests that match the network service manager use case and blocks everything else. So there's a way to do that. Generally, though, the tests I ran did not have SELinux blocking anything, but it's something to keep in mind if you see things fail. So anyway, the two things are: service accounts — add a service account and bind the pod to it on creation — and the ability to retain capabilities through the pod spec, which we'll likely need for things like adding interfaces.

Awesome, cool, this is good. I haven't added this to a document yet, so I'm actually going to see about enabling the wiki on the Network Service Mesh repo and adding some of this information there, because it doesn't really feel like it belongs in the repository itself, but it's still information that we need. If we enable the wiki, assuming it hasn't been enabled yet, we can document all of this kind of stuff there for people to reference. Another option is to document it in a docs directory so it lives along with the GitHub repo, but this kind of information is true regardless of the state of the code itself, so I think the wiki is a good approach here. Awesome, cool. Thank you — it sounds like you've dug a lot into that, and I've seen some questions about not being a privileged container and instead having well-defined capabilities. I think it's generally good practice to be able to do precisely what you need and no more. Yeah, I have a little bit of experience with this — I did it for another project too, and I was playing with the same things you just described. Last night I was playing with the service accounts for the Network Service Mesh, so I created one and pasted the link in the chat, just to give a little perspective on what you were talking about right now, so folks can go over and see. I'll try to create a pull request later today with this. Thank you so much, I appreciate it.

Excellent, cool. So next up we had Kyle talking about CRDs. Yeah, right. I think we reviewed this on the last call — CRDs are custom resource definitions, and they're kind of the way we've decided to expose our resources. Essentially it allows us to use the standard Kubernetes machinery: they'll act as a database for us, and we'll be able to use kubectl and everything like that for all of our resources. So over the last couple of weeks, I was able to figure out a way to take our protobuf file — and from that, obviously, we can generate Go code that has a bunch of structures inside of it — then create a types.go file that references those structures as the spec, and then use all of the Kubernetes code generation tools to generate everything we need and stitch it all together. Frederick and I spent a bunch of time last week reviewing that, and he merged it this week as well. Right now there is one problem with what was merged that I'm still looking into, and that is issue number 59: deletion doesn't quite work as expected right now. I won't bore everyone with the details, but if you want to go look at that issue, take a look.
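Roughly what that types.go pattern looks like — a hedged sketch rather than the actual file in the repo. In the real code the Spec type is the structure generated from the protobuf file; here a stand-in struct keeps the sketch self-contained, and the code-generation markers are the usual ones the Kubernetes generators expect:

```go
// Package v1 sketches the types.go shape described above.
package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NetworkServiceSpec stands in for the protobuf-generated structure.
type NetworkServiceSpec struct {
	Name    string `json:"name"`
	Payload string `json:"payload"`
}

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkService wraps that structure as a CRD object so the standard
// Kubernetes code generators (deepcopy, clientset, informers, listers)
// can produce everything the controller side needs.
type NetworkService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec NetworkServiceSpec `json:"spec"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// NetworkServiceList is the list type kubectl and the generated listers expect.
type NetworkServiceList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`

	Items []NetworkService `json:"items"`
}
```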
I also opened 58 and 57; those aren't really issues so much as things Frederick and I flagged during the review as worth looking into — they're not really bugs. So basically what's in there should work now. I also opened — and Pratik, this is what I referenced in the issue that you opened — PR 60, which includes a bunch of sample configuration and scripts and updates the file that documents how you might want to try this out with Minikube. So that was pushed out as well. And then I have PR 61 open, which fixes a few bugs in our informer factory usage. So I know that was a lot.

Awesome. Another thing I figured out last night while trying this: there are no Docker images in the Docker Hub repo, so I had to create my own image and try it out. It would be great if we could push those images. So Pratik, I feel terrible — if you had come in after 60 merged, most of what you hit would have been taken care of. Actually, you should definitely review 60 and tell me if there are other things you hit, since you've spent some time with it. Yeah, I expected it, because the project is very new. I was able to build it easily, so that's not a problem; I put the image into my own repo and pulled it from there, and that worked. Well, that would be great — and definitely eyeball 60 as someone who recently tried to use this; your feedback there would be excellent. Sure, thanks. Cool, awesome.

Very cool. So anything else on the CRD front then, Kyle? Not really. I don't see Chris Metz on the call, but Chris and I talked yesterday. I know he was working on kind of a day-in-the-life-of-a-packet thing, but much like Pratik he was also interested in getting this up and running, and he wanted to try it with Minikube. I wasn't sure if he'd have time to take a look at it, because he and I spoke later in the day, but I thought he might have had some feedback on number 60 as well, as another person who went and tried to get this all up and running, at least with what we have now. Okay, cool.

Yeah, one other thing I actually did notice from the CRD work — and this is probably worth discussion, because I don't really know what the right answer is here. In all the presentations I've been giving, I've been talking about a network service potentially exposing multiple channels: we have a service definition, and it can have multiple channels, each with its own name and payload. This was done basically 100% because I was imitating Kubernetes services as closely as possible. I noticed that in the stuff you currently have in there, you essentially have one service equals one channel. Now honestly, in the examples I've tried to work with, damned if I've actually found a good example for multiple channels, right? So I'm in no way complaining; I'm simply saying at some point we should probably talk about whether we can think of cases where we would need more than one channel per service. Yeah, that's a good point, and this is actually another area where I wish Chris were here, because I think he has some feedback on that as well. Yeah, the only one that comes immediately to mind is, let's say I have a network service and I expose a channel that handles an IPv4 payload and a channel that handles an IPv6 payload.
That's the one example that comes to mind, right? It's the same logical service, and you're getting L3 payloads in both cases, but you often handle v6 and v4 quite differently. Yeah, and I agree with that. My only concern is — and obviously it seems like we might want to solve the simple cases first — that so far it doesn't seem like it's a lot of work to carry multiple channels from a code perspective. So as long as that holds, maybe we should leave that flexibility open for down the road, rather than closing it down now and then having to change the model and everything later. One other thing we may want to do is loop in SIG Networking for a little bit of wisdom here, because my understanding is that historically, services did not start out as multi-port things. They went multi-port for a reason, and it would be good for us to have a clearer understanding of why, and what their feeling is about whether that turned out to be a good decision. Makes sense. Because on the one hand, I like making things as simple as possible for developers; on the other, this is not a huge overhead and it does introduce additional flexibility. So I'm really curious about other people's thoughts. I'd probably go with Kyle — I mean, leaving it in is easier than taking it out later, and as you stumble across use cases you can see what you rub up against. Right. I think Ed does have a valid point, but before we remove it, I think we should talk with the SIG Network folks to see if they have some use cases, or just circle back with them to really make sure. Well, I mean, it's more about what their experience is and how they feel about their own decision. I'm sure we've all been there, where somebody is copying something you were responsible for and you're like, don't do that — that was a bad idea, I can't get out of it anymore because it's already set in stone, but if I had to do it again, I'd never do that, right? We've all been there. So probably.

The thing I thought about last night that we need to think about a little bit is how we integrate with Service Mesh, because we open up new channels. I mean, Service Mesh is trying to have a data plane and a control plane, and the Service Mesh data plane is on the Kubernetes network. We're opening channels in a different network. If I want to apply security or policy from Istio, there's now this invisible network running around. That is actually something we should think about as well. And I think you ended up bringing up two things there. One is your point about the invisible network piece. The other is that we may have some use cases where the thing you're reaching the service via is not the Kubernetes network but some other network. So imagine for a moment that I've got a box with a physical NIC connected to some magic network, like a radio network, and I want to reach a network service that is only reachable via that radio network. That may be something we have to think about as well. I would say the default should definitely be via the Kubernetes network, but there will be cases where that's not going to be quite the thing, so we definitely need to think through some of those. Would you be willing to put together a crisp statement of those problems for either the mailing list or the meeting next week, John? You see how that works, John? Yeah. Now you see the oppression inherent in the system.
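To make the multi-channel question concrete, a hedged sketch of what a spec with more than one channel might look like in Go — the field names and the service name are purely illustrative, not the current CRD schema:

```go
package main

import "fmt"

// Channel is a single named entry point into a network service, carrying one
// payload type (loosely analogous to a port entry in a Kubernetes Service).
type Channel struct {
	Name    string // e.g. "v4" or "v6"
	Payload string // e.g. "IPv4", "IPv6", "Ethernet"
}

// NetworkServiceSpec sketches the multi-channel shape discussed above: one
// logical service exposing several channels, each with its own payload.
type NetworkServiceSpec struct {
	Name     string
	Channels []Channel
}

func main() {
	// The dual-stack example from the discussion: the same logical service,
	// with separate channels for IPv4 and IPv6 payloads.
	svc := NetworkServiceSpec{
		Name: "example-l3-service", // hypothetical service name
		Channels: []Channel{
			{Name: "v4", Payload: "IPv4"},
			{Name: "v6", Payload: "IPv6"},
		},
	}
	fmt.Printf("%s exposes %d channels\n", svc.Name, len(svc.Channels))
}
```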
This is a very good motivation for people to shut up in your meetings. It's amazing how little you have to say before you get pulled into the system. Sure, give me an AI (action item). Okay. Awesome. I shall stop having these thoughts in the middle of the night. Yeah, I wish you better luck with that than I've had. Awesome, cool. Anything else on developer activity? Cool, we've had a pretty full week.

Awesome. Shall we move on to review of the use case mapping stuff? We can; there's probably not much there. Prem said he did something on the distributed bridge, but I only see one diagram he's added — just the one — and I'm not sure it's different from last week's, so there's not much more. I think he's traveling. Yeah, I know he's actively traveling; he's going to be on vacation for the next few weeks, traveling to India, although he says he's still going to show up to the calls when he's in the air. My expectation was he was frantically trying to get everything squared away so he could go. So, just in general for these use cases, it'd be useful if people would comment about things that are missing or not clear. I would give an AI to everybody to just, you know, add some annotations and comments to the document, and I'm sure Prem and I and everybody else can help add more clarity. Yep, that's a good idea. I think getting some feedback would be good, and I have minimal ego about people criticizing. Yeah, I tend to agree with you — when someone is criticizing things, for me at least, it means either A, I need to rethink them, or B, I'm expressing them poorly and need to improve the expression of the idea. Yep. Correct. So, cool. Awesome.

So, moving right along: the meeting time planning stuff. There was an AI last week that Prem was supposed to send out a new poll or Google form for this; I don't think that actually happened. Do we have Mike on the call? Because the other thing that came up was whether he had any concrete people who were actually having a problem with this being on Friday. I don't think we heard back on that, and I don't see him on the call. Does anyone else want to speak to this, or should we move it forward to next week again? Yeah, I think we should give this one more week and then just remove it from the slide, personally. Totally fair. Cool.

Now for the really fun part: action planning for next week. In code activity, what are folks looking to do for next week? Well, I'm going to be trying to tackle 59, I think. Okay. The other thing I'm going to do that's tangentially related, which Frederick and I talked about, is come up with an almost refreshed example of how to do CRDs; Frederick and I were going to co-write a blog post on this as well. Yes, some context behind that: there are a bunch of blog posts that half implement CRDs, and we weren't able to find any that show in full how to make them work, which suggests the people who wrote those posts stopped before they actually got things working. So we thought it would be a good area to write up properly. Real quick, what is issue 59, so I can follow along? Issue 59 is the deletion problem with the current CRD code. Yeah. Okay, cool. So there's a half blog post from, I think, the Red Hat folks about CRDs and how to generate the code for them. There's that one.
There's a half one there, and there's another gentleman who last year had an example repository and took it most of the way, but yeah, it'd be nice to get one concise, here-it-is-in-full write-up. Yeah. Yeah, including writing the controller side using the informers and actually implementing some business logic and stuff like that. Yeah, that part is a little confusing for first-timers — the informers and listers. The first time I tried, it took me a while just to understand those listers, informers and everything. That part is especially tricky. Yep, that sounds like a very useful thing then. Cool. Anything else, Kyle, before I move on to Pratik and Frederick? That's about it for me. And for anyone else pushing pull requests, I'll dedicate some time to reviewing things next week as well. Excellent, many thanks.

Cool. Frederick, any ambitions for next week? Well, I was also going to take a look at examples — we need to work out sort of a hello world story that others can use for the Network Service Mesh, and see if we can start working towards that. It could be something simple, like chaining multiple things together and having one of them respond or echo or so on. But from a coding perspective I don't know what that means just yet, so I need to work on that. I'm suddenly struck with the evil desire to put together the ping network service — it just responds. Yeah, well, that's what I was thinking: it needs to be something where we can get traffic through in both directions. So even just working out what we want to demonstrate as a starting point; eventually it may turn into a boilerplate for people who want to build out their first chain, so they can learn from it. Yeah, one thing that comes immediately to mind that shouldn't be so hard would be a quick tunnel: I have something that talks VXLAN and it needs to talk to something that talks GRE — what do I do? Just brainstorming a little here, but simple transforms like that.

Yeah, and the other thing I want to do, aiming towards this as well, is to start loading these things up as services, as daemon sets and so on, and to get some Kubernetes config files checked into the repo so people can start applying them. That's a whole set of tasks, ranging from creating the YAML files to creating the images; we have to work out how we want to get the images onto a repository somewhere, which means we have to build images somewhere. So I think we need to start working that particular path out so that we can have a deployment story as well. This will also help with the long-term goal of getting integration tests in place for Network Service Mesh. So whatever use case we end up with from the discussion earlier, we'll be able to take that particular use case and turn it into an integration test that exercises it end to end. I'm thinking we need to start working in that direction just to act as an anchor and simultaneously validate our design choices — while we're working through it, we may find problems with the design that we need to tackle, and I think the best way to work these things out is to come up with a concrete example and work towards it. Well, the nice thing about doing it with the hello world case is that effectively we can then turn use cases into integration tests.
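Since the informer and lister plumbing keeps coming up as the confusing part, here is a minimal sketch of the usual client-go shared informer pattern — shown with the generic core/v1 pod informer rather than the NSM-generated ones, purely for illustration:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The factory owns the shared caches; 30s is the resync period.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods()

	// The informer delivers add/update/delete events from the watch.
	podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Println("pod added:", pod.Name)
		},
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted")
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)            // starts the underlying watches
	factory.WaitForCacheSync(stop) // blocks until the local caches are filled

	// The lister reads from the local cache instead of hitting the API server.
	pods, err := podInformer.Lister().List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Printf("cache currently holds %d pods\n", len(pods))

	select {} // keep running so the event handlers keep firing
}
```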
And I like that a lot. Yeah, so along those lines, if you all want to take a look at PR 60, that was kind of my intent — it moves us in that direction a bit, because from just the CRD perspective it has enough logic to almost be a super simple integration test: hey, we've got it up and running, Network Service Mesh is running as a daemon set, we can go and create the CRDs, then we can actually create CRD objects and verify they all came up and are still there. Perfect. So I would echo what Ed's saying: can we start tying use cases into code? Because I think as we go through, that will sharpen up the use cases, because we actually say this is how you implement them. And to Fred's point, we'll identify gaps. Yeah, definitely. And I think we have enough of the plumbing implemented now to get to the point of doing the actual use cases and such. Cool, awesome.

So, not to put you on the spot, Pratik, it sounds like you may have ambitions this week as well. I'm sorry? You may have ambitions this week as well. Yeah, I haven't looked through the whole code yet. I found some of the issues, so I was just trying to tackle them; whatever I come across, I'll try to add PRs accordingly. And if you guys have any particular tasks, I can take a couple of those to help. Cool. I know, Kyle, you made an attempt to get some newcomer-friendly issues into the GitHub repo. Yeah, yeah, definitely. So Pratik, take a look at the issues there. Some of that stuff is relatively simple and some of it is a little more complex, but definitely. And I don't know if you're on IRC or not, but jump on IRC and get on the Network Service Mesh channel as well, because that's another great way — there's a ton of discussion that happens there. Sure, I'll try and join today. Cool, awesome.

And the other thing I just wanted some thoughts on: what was the rationale for not running the daemon set for the Network Service Mesh on all the nodes? Right now we have to label a node specifically for it to get scheduled on that node, so I was just trying to understand whether there's any specific reason behind that. I don't think there is any at all, really; that was just a simple way to get it going. And I guess the other way to look at it — and actually this is probably a good discussion point — is that by doing it that way, at least initially, we're not requiring people to run it on every node, so we don't come across as onerous, as if we have to run everywhere. So what are the ways to approach that just to make people's lives easier? In the YAML file you're building, you could effectively have it apply the daemon set everywhere, and then put the node selector bits in a commented-out section with instructions to uncomment them if you want to run on only a subset. That way it's really clear: okay, if you don't want to run it everywhere, this is how you do it. But I suspect that from the kicking-the-tires point of view, most people are just going to spin up a cluster and try it at this stage. Because when I just tried it, I saw it's a daemon set, so it should just go on each node — then I saw, okay, I have to put a label on because there is a selector, so I was a little confused. Is there any rationale behind putting it on specific nodes, or can it come up on all nodes, as you were saying? No, it can definitely come up on all nodes. If you want to submit a patch or pull request for that, that's fine too. Okay, sure.
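A rough sketch of what that daemon set shape looks like — expressed here as client-go structs to stay in Go rather than YAML, with the node selector as the optional piece; the label keys and image are placeholders, not the repo's actual config:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	podLabels := map[string]string{"app": "networkservice-daemonset"} // placeholder labels

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "networkservice-daemonset"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nsm",
						Image: "example/networkservicemesh:latest", // placeholder image
					}},
					// With no NodeSelector, the daemon set lands on every node
					// that has a kubelet. Uncomment to restrict it to labeled nodes:
					// NodeSelector: map[string]string{"networkservice": "enabled"},
				},
			},
		},
	}
	fmt.Println("daemon set:", ds.Name)
}
```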
Cool, thank you. Yeah, that's my opinion: it should be a daemon set, and we should have at least one running on every node that has a kubelet. Okay, cool. Awesome. Anyone else have any ambitions around code for next week? Cool — use cases. Anyone have use case ambitions for next week? That's not surprising, given that Prem is in the air right now. When I spoke to him yesterday, he had quite a lot of ambitions that he was excited about, but I'll leave it to him to sort those out. I will try to take a look at the code and the use cases and see if I can draw a line between the two; I'll probably ping Fred and Kyle. Cool. I can get some cycles in, but my next week is kind of a little full. We know how that goes. But these are ambitions, not commitments. Awesome, cool. So anything else folks have in terms of action planning for next week that we should capture here — other things people are planning on doing? Excellent. So we now have a couple of things we're going to do for the next week.

So we now get to the section of our meeting where we've got open space to talk about some of the conceptual issues here. I don't know how many of you folks were there, but I did give a presentation to SIG Networking yesterday, and in that presentation there was an amazing conceptual breakthrough in how I'm explaining this. That conceptual breakthrough was that I remembered to explain what the data plane is doing, which I'd previously not done, and which I feel a little silly about. So we can walk through conceptually any sort of stuff folks want to go through — any unanswered questions, areas we need to explore, et cetera. Don't all speak at once. And I'm perfectly happy, by the way, to conclude the meeting early if we've run through the business that folks have. I just always want to make sure I leave the door open, because every time people come with questions, the explanations get clearer for other people, right? So those questions end up being intensely valuable.

So maybe one thing — I was in the meeting, listening in. What are the next steps there? I mean, it was a good presentation, I think most people got it. How do we follow up to align? Yes, I think there are sort of two things, and in some sense, when Tim asked that question yesterday, I was having such a good time that I borked the answer. Because it really comes down to this: I think we're on a really good path in terms of actually getting shit done. From a formalism point of view, we need to figure out whether it makes sense to seek to be a working group under SIG Networking or a CNCF working group. And my sense is that it's probably in part a question that should be posed to SIG Networking, to see what their opinion is about where they'd like us to be, because you could look at it one of two ways — and in fact, I think you heard Tim on the call look at it one of two ways. You can either say this is completely orthogonal to what SIG Networking is doing in Kubernetes, which is actually a good thing, in which case we might want to be a CNCF working group; or you could say, well, that's true and it's a good thing, but it's also true that SIG Networking could gain benefit from building some things on top of this when we get there. And both, I think, are valid points of view; it's just a matter of what that community wants.
Does that make sense? Regardless of which way it goes, I think there still has to be fairly close working between the two groups, because they're not ships passing in the night; they definitely have to have at least awareness of each other. At minimum, maybe common terminology. Well, I'll probably have to dispel "service function chain", God damn it. Hey, Ed — I might have missed this because I was out traveling yesterday, but do you have a link to your presentation, something along those lines? Yeah, it's actually up now on the Ligato Network Service Mesh GitHub; if you go to the repository, it should be linked there in the README.md. All right, thank you. And as I said, I feel silly that that was the first time I actually explained the entire data plane part of the story.

Ed, this is George, I have a question. Yeah? Actually, John mentioned it earlier: what's the relationship of Network Service Mesh with Service Mesh? Well, so think about it — hang on, I actually have a slide that I think helps with visualizing this. Let me see if I can dig it out. It's really an issue of layers. Hang on, it's loading. Okay, here's the slide I'm looking for. It's really a matter of layers, right? What Service Mesh is working with is primarily L4 through L7: how do I proxy TCP ports around, how do I route HTTP/2 messages across a variety of available TCP connections, that kind of stuff. And it does that really nicely. What we're looking at in Network Service Mesh is primarily things at L2 and L3: I have Ethernet frames that I need to treat as payloads, or IP packets, that kind of thing. Yeah, I understand this part. So does that mean Network Service Mesh will extend Service Mesh, basically inheriting everything we have in Service Mesh, or do you basically select one or the other? Oh, they should be utterly compatible. Now, modulo a couple of comments that John had made: if he wants Istio to be able to operate over some network service that's been plugged in by Network Service Mesh that Istio doesn't understand, there could be an understanding mismatch there, but I think that should be rectifiable, because the Istio guys do have ambitions of not being solely tied to Kubernetes and of being able to do Istio service meshes across multiple clusters and the like. I don't think it's the Istio control plane that's the issue, because it has pluggable modules; it's the data plane piece, because Istio can plug into NGINX, Envoy, and a bunch of others. The question is, how do you get the information from the data plane into Istio? If the data plane is now Network Service Mesh, how does it talk to Istio, and how does Istio apply policy into it? Well, and I would also question that a bit, because the thing is, I don't think Istio naturally understands very well issues at layer two and layer three, and neither do I expect us in Network Service Mesh to have the first clue about stuff that happens at L7, right? So there's an interesting open question to think about: whether Istio should be aware of Network Service Mesh at all, and if so, in what manner. But I think clearly the goal, George, is to let the Service Mesh boys handle L4 through L7 and have Network Service Mesh handle L2 and L3, and then we get a nice traditional layering scheme the way we all know and love in networking.
Yeah, yeah, that's why we need a Network Service Mesh — because we're missing the L2 and L3 layers. The question I have is: does everything going on in the Service Mesh automatically get picked up by Network Service Mesh? Yeah, and I think that's what John was getting at: what does Istio need to know to operate over the Network Service Mesh? So think of it from a layering point of view: you have the Network Service Mesh, and then you have the Service Mesh that you would like to be able to work over the Network Service Mesh, but that doesn't really know or understand what's happening down there. Does that make sense? Yeah, yeah, okay. Yeah, cool. Pardon me. Anything else folks want to talk about before we conclude for the day? All right, cool. I will see you guys next week. Thank you. Thank you.