Let's go. Is there anyone who's willing to share the meeting notes on screen so we can follow along? OK, I probably can. Let me clear a few things up and away we go. All right. Can everyone see the share? Yep. Cool. All right. Fred, do we want to dive in? Have we lost Frederick? Sorry, mute button was on a different screen. Anyways, let's go ahead and get started then. So please add yourself to the attendees list if you have not done so already. If you need help with adding yourself to the attendee list, let one of us know and we will add you, in case you're dialing in or otherwise. OK, so agenda bashing: is there any conversation or any topic that anyone would like to have that is not on the list? OK, I'm going to assume that people are either on mute or don't have anything to say. So, events: we have, at the end of this month, Mobile World Congress. Sorry, I was on mute, as you say. I'm just asking for an architecture discussion, and it would be nice to have it this week if possible. Sure, I'll add it to the agenda. I think the agenda list is not that long today, and I think we should absolutely get to it. You say that with such faith. I always say it with faith. Once more, with feeling. So, Mobile World Congress in Barcelona. These have a tendency to aim more towards demos that are service provider centric, NFV use cases and so on. I will be heading over there for the conference. So if anyone has any recommendations on people or companies I should go talk to in terms of getting them added to our Network Service Mesh fold, please let me know, and I will go have discussions with them. Or if you're going to show up, we can go have a drink together. We have Service Mesh Day coming up in San Francisco in 2019, which reminds me I need to put in a talk for that. So if you're in the area, feel free to attend. 
I think that'll probably, it'll definitely be more application service mesh oriented, but we're going to try to expand the scope a little bit. Then we have ONS North America in San Jose. The call for papers is closed. The call for papers notification date listed is not accurate; that's a little bit more aggressive, so we need to find out what the real dates are going to be. Anyways, that will be in San Jose on April 3rd through 5th, and we have a couple of talks that have been submitted, so we will see if they manage to get in. April 9th through 12th, we have the Upperside conferences covering MPLS, SDN, and NFV in Paris. I don't know if we have any related talks there, but some of the content may be interesting. We have Container World; the call for papers is closing on February 8th, and this is in Santa Clara. So another one that we should definitely consider submitting for. We also have a co-located event at KubeCon. Actually, the Container World one seems incorrect, because I recall the Service Mesh Day deadline being February 8th, and I think Container World had already announced their schedule. Yeah, so that's in the wrong spot. So let's move that up. So the paper got accepted, and Frederick and I are going to talk about the basics of network service mesh. Awesome. One other thing I want to add is, OK, I'll let Frederick complete. We are also looking to submit a demo in the LF Networking booth for ODL and network service mesh integrated together. I just want to add that to the list, too. Can you add it? I'll add it, yeah. Cool. Yeah, so I think some of the stuff got mixed up, so I've fixed it up: the Service Mesh Day call for papers closes on February 8th, and Container World has already had its call for papers complete and the schedule published. 
And we have a network service mesh discussion, or a talk, that is being driven by Prem from Lumina. We also have KubeCon EU in Barcelona, Spain. The call for papers for that is already closed as well; we're waiting for the notifications. And we also have some co-located events that will occur, such as the FD.io Mini Summit. The details will be announced later on. We also have KubeCon and CloudNativeCon and Open Source Summit, all combined into one, in Shanghai, where the call for papers closes on February 15th. So if you are in or going to be in that area, feel free to submit a talk, and if you need help with content, let us know and we will help you. And finally, we have the Open Networking Summit, ONS, in Europe, which is going to be in Antwerp, Belgium. The call for papers window for that is still pending, so we'll get more information on that later. We don't have any announcements listed, so let's jump straight into the agenda. We start off with stars. One of our minor goals coming up is that we would like to be a part of the CNCF landscape. Part of the requirement, if you're an open source project, or one of the triggers to become a CNCF landscape project, is to have lots of GitHub stars, and that number is 300. So invite any people that you think would be interested in starring our project to come in and star it. We can jump it from, I think it's around 600, or sorry, it's around 60 at the moment. So we need to bump that up to around 300. So feel free to invite anyone else you can find who would be willing to help with stars so that we can land on that page. Is there anything else you want to add to that, Ed? Or is that good? You might be on mute. You are on mute. Yeah. So I'm adding a link to the repo here real quickly. Let me make sure I get it right, actually. So you can just click through to the repo. And if you haven't starred the project yet, if you could please go star it. Cool. 
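As a side note, the star-count goal above can be checked mechanically. This is a minimal sketch, not project tooling: it assumes the repo lives at a path like networkservicemesh/networkservicemesh on GitHub (the exact org/repo is an assumption) and uses the public GitHub REST API's stargazers_count field.

```python
# Sketch: how far a repo is from the 300-star CNCF landscape bar.
# The repo path passed to fetch_stars_needed is an assumption; adjust
# it to the real org/repo.
import json
import urllib.request

LANDSCAPE_THRESHOLD = 300


def stars_needed(repo_json: bytes, threshold: int = LANDSCAPE_THRESHOLD) -> int:
    """Parse a GitHub /repos/{owner}/{repo} API payload and return how
    many more stars are needed to reach the threshold (0 if already met)."""
    count = json.loads(repo_json)["stargazers_count"]
    return max(0, threshold - count)


def fetch_stars_needed(owner: str, repo: str) -> int:
    """Fetch the live repo metadata and compute the remaining star gap."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        return stars_needed(resp.read())
```

At the roughly 60 stars mentioned in the call, `stars_needed` would report a gap of 240.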
Next, we have some specifications. There are two documentation efforts I would like to try to get the community to drive. The first one is a high-level component view. In other words, things like: we have a network service mesh daemon, we have a client, and we have an endpoint. Just, what are those high-level things? Any thoughts on this, Frederick? And not to gang up on you and Ed, with Ian here, but I mean, I know that there are several of us that are more than willing to help work on this. I've been anecdotally harassing Ed on Google Chat trying to get answers to questions and collecting data. But actually having a full rundown of, like, these are all the bits, this is how they all fit together, here's what we want people to see, here's what we want to gloss over, et cetera, et cetera. I think it'd be really helpful, because what I don't want to do is spend a ton of time documenting things that are wrong, if that makes sense. That makes total sense. I guess the question is, what's the most expeditious way to run through some of that from your point of view? Because what I could do is take some time to actually jump on a call with folks. We could walk through some of what's there. We've got a reason to do that on the abstract architecture, et cetera, that we can talk through and get to the particulars. I could answer and drill down on the questions around that. You guys can start generating the basic things, like a glossary and everything, also. Yeah, that's kind of what I was thinking. We can either do it in this Tuesday meeting, or maybe we set up a specific meeting with the people that are in that little group that we started. But obviously, you don't need to create an architecture deck and then present it to us, because then obviously that defeats the whole purpose. I was just going to reuse some of the existing pretty pictures that I've got. Yeah, and maybe walk us through some of the code and stuff. 
Yeah, I think it's important, because if your pretty pictures were going to solve the problem, they would have solved it by now, which is why we need your input as well. Oh, no, no. I'm not saying just go look at the slides. I'm saying let's jump on a call; we can talk through some of this. I'm happy to walk through some of the code, and as we need graphical aids, I've got a bunch of graphical aids on hand. Yeah, I mean, I'll definitely reuse work. But I think it would be helpful for us to just get in an environment: let's look at the different processes running, let's take a look at some of the manifests, let's take a look at how things are stitched together. Like, if we're going to describe what a network service endpoint is, I need to legitimately know what it is. Absolutely delighted to do that. Let's do the following. I'm happy to sit down with the folks who want to do the documentation. I do want to make sure to keep the actual calls public so that anybody who wants to come in and participate can. I suspect that this Tuesday call is getting sufficiently crowded that it probably won't be the right venue, because we do tend to be running up against the clock. So I guess the question is, would you be willing to go start a Doodle poll for a time that we could put together for folks to get together on a call, and talk the documentation volunteers through whatever it is they need to be talked through? And I'd be happy to do that. And we can basically do an hour a week until you guys are satisfied. As long as docs are coming out of it, it's a more than useful investment of time. Yeah, I think it would be helpful to have, like, an architecture review call in conjunction, just because, I mean, in this one we're focused on who's speaking where and what feature do we want to add. So a more technically focused call in parallel with this one, I think, would be helpful. 
Yeah, so if you could go and kick off like a Doodle poll or something, make sure to CC the public list, and we'll just sort of figure out a time and start doing it. Sound good? Yep, sounds good to me at least. Yeah, and so the reason I added some of these was exactly for this reason. I want people to make suggestions as to what will be helpful for them. So my thought was we could probably start with a high-level component view, but if there's some specific area that would be much more effective, then we absolutely should start with that first. I mean, one of the things I want to be super clear about is that effectively the folks who are working on putting together the documentation in some sense have a much better view of the kinds of gaps in the current explanations than I do, or than I suspect Fred or Nikolai does. Because if we magically knew what the gaps were, we would just go fill them. So it's super, super useful to have another set of eyes looking at things, asking questions, and figuring out how to express them. Yeah, I fully agree with you, Ed. In fact, I'm also talking to a few of the folks and I'm getting questions collected out of it. For example, one of the things that has come up is the security aspect of it, yeah. So yeah, it's important that we collect these and then document them. Agreed? So I've also added a request for a coder, but it looks like Nikolai has already filled in an assignment. So basically, we have a readiness and liveness probe that has been specced out. It's a very easy first project, or first thing to do. Sorry, the one question I have with respect to the to-do items: do we need special privileges to access this? Because I don't have the assign option. So let me sort of, this is a bizarre detail of GitHub. Basically, if what you're talking about is that you went to try and assign yourself to a bug and couldn't: effectively, with GitHub, you can't assign yourself to an issue for a repo. 
In fact, I can't even assign you to an issue for a repo if you're not listed as at least a read-only collaborator. OK. And so if anybody is like, hey, I want to pick up a shovel, could you please add me to the list of read-only collaborators? Just give me your GitHub ID. I'm happy to do that for whoever wants it, right? Because then you can assign yourself to issues and that kind of stuff. Sure. I love that. Thanks. So yeah, I understand why GitHub is that way, and it makes sense, and they're actually right, but it is a little inconvenient at times. So we have a minor Slack revolt going on in the chat, by the way. Let's see. I can tell you that there are potentially minor complications with Slack. I would have to check and see, basically. Slack has a pricing model that's a little bit unfriendly for open source projects. I know that that has been somewhat solved by other projects in the CNCF, and I could see if we could solve it in a similar way. But I have also heard people mention spectrum.chat as an alternative to Slack. So I'm totally open to having the discussion if somebody wants to add it to the agenda. Yeah, my personal preference at this time would be to keep IRC until we join a project like the CNCF who can foot the bill and manage users and all that kind of stuff. It turns into having yet another tab in the Slack app, and when I already have 10 to 15 tabs, it gets, yeah, unwieldy. But I suspect Slack will end up happening eventually when we join up with a group like the CNCF or someone. Quick question for the folks who are highly in favor of Slack, which is a totally cool thing, right? Is part of that because IRC is either a pain in the ass to access or blocked by your employer? The first one. And also, I have the inverse opinion of Frederick: I'd rather have 15 channels in one chat application than 30 different chat applications. Yep. Yeah, that's another one of the arguments, isn't it? 
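The read-only collaborator step described above can be scripted. This is a hedged sketch, not the project's actual tooling: it builds (but does not send) the GitHub REST API request that grants "pull" (read-only) collaborator access, the level that makes someone assignable to issues. The org, repo, and token values are placeholders.

```python
# Sketch: build the GitHub API request that adds a read-only collaborator.
# The endpoint is PUT /repos/{owner}/{repo}/collaborators/{username}
# with a body of {"permission": "pull"} ("pull" == read-only).
import json
import urllib.request


def add_collaborator_request(owner: str, repo: str, username: str,
                             token: str) -> urllib.request.Request:
    """Return a prepared (unsent) PUT request granting read-only access."""
    url = f"https://api.github.com/repos/{owner}/{repo}/collaborators/{username}"
    body = json.dumps({"permission": "pull"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# Actually sending it would be:
#   urllib.request.urlopen(add_collaborator_request(owner, repo, user, token))
```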
The biggest issue I'm finding with IRC so far is, when you actually chat in there, nobody answers, one. But two, when people chat in there, unless someone is recording the channel, I haven't actually seen a record of anything, which means that it's limited to the people who were present at the time and then it's lost for all time. Yeah, so there are a couple of things there. I'm not saying these are, by the way, perfect responses. One of them is that a lot of folks use something like IRCCloud, which keeps you always on and therefore keeps an ongoing record of the channel. The other one that I will mention, and this is sort of weird: Freenode actively asks projects not to build public archives of their channels. And that's a little weird, but most projects I know have actually decided to respect that so far. Right, so maybe the easier answer here, rather than a chat app, is we get a lot more consistent about using the mailing list for things. I find mailing lists enormously less productive than chat apps. Well, they are when you don't use them, yeah. So I mean, it's one of these things where, quite honestly, communities fall in different places depending on the personalities of the communities. I'm happy to engage with folks on whatever medium they reach out on. So for example, we've had a small number of emails that have hit the mailing list, and I've been careful to make sure to respond to them. So, you know, if folks reach out on the mailing list, absolutely, that's fine. But most people prefer a more direct collaboration. As to folks not answering, I know that the channel is actually pretty flipping active, particularly in the morning at specific times, because we've got a lot of folks working in Europe who are on the channel then. So, you know, lots of stuff goes on on the channel. So I'm not quite sure. Well, let me highlight where this tends to be a problem, right? 
Jeff's trying to document the architecture which has been agreed among people, presumably largely on IRC, which means we've got no record of the agreement, or why they chose to do it that way, or anything. That, I think, is a fine example of why we're running into problems right now. The agreement was from last week in the meeting, which you weren't present in. So that was not actually on IRC. No, I mean, that was in fact last week in the meeting, the agreement that we'd document the architecture. But the agreement on the architecture goes back a long while before that, or else you wouldn't have written any code by now. So my point stands that there's some of this stuff where there are early discussions that are known only by the people that had them. Well, part of what's actually happening right now, where we're trying to get better about that, and I've just switched to the spec board, is we're trying to encourage the writing of specs, so that they're actually structured in a way that's intentionally designed for there to be conversation around them. A good example of this right now is the monitor, basically the metrics piece of the spec, that's been going on. I know Matthew and I have had a very active collaboration around that. So basically we do have spec issues that are coming up. We do have the review of the spec board; that's literally the next agenda item up in the meeting. And the way they're structured, typically, is there is an issue, then the issue will point to a Google Doc, because it's easier to collaborate there, and then once we figure out what we're doing, that will roll back into the issue so it can be executed on. Yeah, that's the intent, because you're correct: people talking on IRC, even finding it later, even if you have an archive, is a pain in the ass. So that was the idea behind driving towards these specs: let's find a way to get the community to rally around and start working on things in one area. 
And I don't wanna tell people you can't talk on IRC, or you can't talk on Slack, or in person, or whatever, but whatever comes out of that should drive through this particular process. And we'll evolve this process to be more accommodating to people as well over time. Right now we're just getting started with this. Well, I mean, the other thing I wanna be super clear about is this is also not the OpenStack blueprint process. This is a way to try and encourage collaboration; it's not a mandatory gate in the system. So a lot of us who've been very active are trying to utilize it because we think it improves community engagement. But if you have some piece of work you need to do and the notion of going and writing a blueprint or a spec or whatever seems overwhelming, you don't have to, right? But it will probably lead to easier contribution, because the spec will also often point out details like, oh yeah, if you were to change that, this over here is probably also a good thing to change, that kind of thing. So, okay, we seem to have muddled the agenda somewhat. I think we need to make sure that we capture having the discussion about Slack in the agenda, if someone could please add that. And then we have a couple of other items as well. Everybody on the pro-Slack team okay with making sure we capture it on the agenda, and then we'll get to it here shortly? Sounds like that's settled; someone added the pro-Slack item on there. So let's jump into the NSM release process. We still have a lot to go through. So, Nikolai, you're up. Do you want the share, or do you want me to just, you know, follow along? I prefer to share. Take it then. It's all yours. Okay. It should be in this one. Okay. I hope you can see my screen shared, at least the browser. Okay, so, in sync with what we already said about the specs: I have initiated this NSM release process spec and it's in a doc here. So this is just quickly announcing that this thing exists. Please go there, read it. 
We have at least one thing that we would like your vote on, and that would be the code naming. We have two ideas already, and if someone has more ideas, please add them here. I know that we can talk on Zoom or whatever, but as we said, it needs to be documented somewhere. So, yeah, I don't want to go into much detail here; we already exchanged some messages about it here. There's a proposal for the release numbering scheme, when we call something stable and when not. Then we have a list of release materials which we are improving; we have improved it a little bit and we are continuing to improve it. So if you have some other ideas, please add them here, at least as a comment, if not directly editing the document. And then we have some proposed dates which look good so far for our first release. So we have the end of April set as a kind of target date, and then a first patch release just before KubeCon. The overall idea here, and this is actually the second thing that I wanted to share, is that we have a project board here where we are adding the to-do, in-progress, and done items. And my proposal was that at each of these weekly group calls we do a quick pass, even for five minutes, just to go and see what is in to-do, in case you want to add something more to the to-do, and kind of just move the things around here, groom it, and have a realistic view of what's going on with the release. I think this is good. I would love to see more folks participating in the conversation about the release process. The other thing is, as I sort of look at the release process, there is a small number of things that we absolutely have to agree on as a community. There's a lot of good stuff in the release process, but there's a small number of things we have to come to an agreement about. And I'm not suggesting we necessarily agree about it this week, but we probably should next week, or at the latest the week after. 
And I think those things fundamentally come down to: what is the intended release date? What is the intended release branch pull date? Are we okay with the branching structure of just pulling a release branch from master? Which most people do, so I expect that's okay. And then the one thing that I actually kind of like about this, but had not seen before, is the notion of essentially treating micro version zero as a release candidate, and micro version one as the first stable patch on the thing. I like that very much, but it's not something I'd seen before. So I want to make sure that that's something that we all understand and agree with. Does that make sense to folks? Don't all speak at once. So there is no alpha or beta release? Go ahead. Yeah, not really. So the idea is that, okay, we have concrete dates here, so it's maybe easier to explain. One week before the date of the first tag, we create the branch, and we start ensuring that everything that we have in our to-do list here is done or nearly done. And then one week later, well, this is on the 30th, we essentially tag the first release. We start running CI/CD, extra testing, whatever we find appropriate. And then we probably find some, you know, small issues, bugs, performance problems, whatever you find. So two weeks later, after we are completely sure that we have run through all the things. I mean, it's not completely specified what this extended testing would be. It's just a way to say, okay, you have two weeks before you declare that this is something that you consider stable. And once it gets to .1, and it's not the minor, it's actually the patch version, so once you get to patch version one, you say, okay, this is stable. So this is what's proposed here. It might work, it might not work. 
It's kind of driven by the fact that we want to go with semantic versioning. And I guess that adding some, you know, key characters at the end, like if you want to add a kind of beta tag or something like that, doesn't exactly fit into semver. That's actually kind of why, even though I've not seen it before, I sort of found some nice benefits to this approach. But I do understand, I think, the point that was made about wanting to have an alpha or a beta version; in some sense people call them release candidates, et cetera. In some sense, I think what you're suggesting is that we treat the micro version dot zero as essentially the release candidate or beta, correct? Yeah, yeah, something like that. Does that sort of match the concern of the folks who raised the alpha/beta thing? Actually, it really makes sense, because we used to deploy on the dot one for exactly those reasons and so on. Yeah. So what you're saying is you wouldn't trust us anyway until it's dot one or dot two, so, yeah. I think that's the wisdom of this approach. But, okay, just please go there and offer comments; that's what we want and what we need the most. There's a ton of other good things in this document; sort of a small number of things are for setting direction, and I like them a lot. But the community agreeing on the dates and the basic branching and versioning structure, those are things we all have to agree on. The other stuff is much more malleable over time. I mean, it provides good guidance and focus, but if somebody comes in and says, well, you know, I fervently believe that the first release should have SRv6 and so I'm going to go work on that, well, okay, sure, that's not actually a problem. But if somebody says, oh, your release date is terrible and by the way, I want to use month-to-month versioning, okay, that is something we all have to agree on, right? Yeah, that's... 
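The dot-zero-as-release-candidate rule discussed above is simple enough to pin down in code. This is a minimal sketch of my reading of the proposed convention, not an agreed implementation: for any x.y series, x.y.0 plays the release-candidate role and x.y.1 is the first release treated as stable.

```python
# Sketch of the proposed stability rule: the micro (patch) version .0 is
# effectively the release candidate; patch .1 and up are considered stable.
def is_stable(version: str) -> bool:
    """Return True once the patch component of an x.y.z string is >= 1."""
    parts = version.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected x.y.z, got {version!r}")
    return int(parts[2]) >= 1
```

Under this rule, a deployment policy of "only run stable builds" would skip 0.1.0 and pick up 0.1.1, which matches the "we used to deploy on the dot one" remark.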
Okay, so yeah, I guess that we're going to go through this next week again, and I hope that we get some comments by then. Yeah, and my hope is that next week we could come to agreement that, okay, these will be the dates, the branching and versioning structure, et cetera. So particularly if folks have differing views about that, like, that train is leaving the station probably next week. Yeah, and even just a simple plus-one somewhere is a form of voting, so we can think, okay, we have approval. Yeah, it's super, super helpful when you're trying to plan things. Agreeing silently is so much less helpful than agreeing vocally. Okay, good. And the last thing that I had on my list for announcing is the existence of this new repo under the network service mesh GitHub. This is an attempt to have a separate infrastructure which will allow you to easily build applications, or kind of full-featured examples and proofs of concept or demos, while relying as little as possible on the full network service mesh repo. So there is some documentation here. Of course, a lot of this structure was copy-pasted from the original repo, and a lot of the things were just cut down and stripped down to the very, very basic minimum that you want. So essentially you can spin up a Vagrant box and then you can just deploy the basic infra, and it's all in here. But I just want to show you what an example is. No, it's not here. Should be here, sorry. So there is the README which explains what you need to do, but it's very, very easy. There is this makefile which essentially just describes the name of the example, then all the containers that you need to build, which should exist as folders here under the example, then all the pods that are going to be deployed. This eventually maps to the naming of the YAML files that are going to be applied, and then you can add a check command. 
And from there on, all the makefile targets that are needed are generated automatically, just by including this shared target makefile. So the purpose here was to really make it easy, without knowing much and without going into all the details of the build infrastructure, to just be able to get all the pieces that you need. I particularly like this, because there's a psychological weight to a long makefile, even if you only have to change something in the first section of it. And so the fact that there aren't a lot of lines visibly following, I think, is going to also be good in this case. So one of the things I did want to comment on: I'm actually super happy that we've now got the examples repo. And while I don't think there's any great urgency, I think it would probably be good to migrate some of the existing examples in that direction. One of the things that's actually gone on in the main repo is, even though the main repo is only about 22,000 lines of code, thematically it's kind of gotten a little bit big. There are just a lot of things in the main repo. And so starting to more intelligently use other repos is probably all for the good. Mm-hmm. So one of the things that I want to state explicitly here is that here we don't build any of the core containers, so NSMD, the data plane, all these things are essentially pulled prebuilt. So for example here, when you want to deploy the monitoring, these are just the same YAML files that you'll find in the main repo, but they rely on the fact that you can download the containers off Docker Hub. Yeah, so that was it. One of the things I did want to throw out there: we've got a lot of makefile machinery and it's done lots of useful things. But one of the things, if there's someone who's interested in working on it, that would be super cool is, a lot of the stuff we're using the makefile machinery for probably could be better done with Helm at this point. 
And so if you're interested in doing some Helm charts to sort of make things go more smoothly, that would be super welcome. I was going to say that next, but you beat me to it. I can say most authoritatively that the make machinery is my fault, so. Another nice property of this as well, by having the examples and migrating them here, is that we want to be able to tell when we're breaking the community, and as we know, not all changes are structural in nature. So if there's something that semantically breaks within a change, then these examples act as integration tests, where, since they haven't been refactored as part of the main core, we're more likely to catch those types of behavior changes and be able to report them as breaking changes, or make a decision to refactor back to the non-breaking version. So I think even though this will cause a little bit of extra work on the CI side and on the core development side, I think it has immense benefits to the community from a CI perspective. Yeah. Okay. So please try it; issues and suggestions are more than welcome. Okay. So with this, I'm actually stopping my share and going back to the screen here, and who's next? Okay. So, Ian, do we have enough time to discuss your topic? It's been put off for four weeks. So let's do what we can do in 10 minutes, because I think we've got a fundamental issue here. Basically, what we have at the moment as a theory is data planes connecting service endpoints to service clients. And that has its uses, as has been demonstrated at KubeCon, as an example. And that's fine. But the issue I run into is that it seems to me that we've got two problems with this model. One is that we're talking about endpoints and clients as if there are producers and consumers of networking, which I don't think is how the world works, because if you look at physical networking, when you connect a router to a switch you don't call one an endpoint and one a client. That's not how it works. 
So there's that. The other one is that, as things go, the data planes that we've built have been using Kubernetes networking as their connectivity, just to make a point and get things pushed out the door, which is fine as far as it goes. But practically speaking, if you look at NFV as an example of how fast networking works, then 50% of this is done with SR-IOV interfaces for speed, and also you can't ignore the physical infrastructure when you're trying to work out what connectivity you've got. So if my Kubernetes boxes are all connected to a router versus a switch, then that gives me different use cases, different behavior, or the same data plane won't necessarily work with both. So I think the question I had was, and I think the reason we've managed to avoid this so far is we haven't really looked at the lowest level of this, what we're building from. Oh dear, I think my headset's gone flat. Hello? There you are. All right, that's fine. My headset just went. You dropped out entirely. All right, so my take on this, and the reason I wanna call this an architecture topic, is I think what we're doing is we're building the house without building the foundation. We've got a fairly good idea of what a service mesh would be for people who wanna consume it as a mesh, but we haven't got a way of building a service mesh, because we haven't talked about the low-level building blocks that we ought to be building with. Providing, without giving them privileges, a container with an SR-IOV interface, to take one example. Providing a container with a memif, or providing two containers with a shared memif. That kind of thing. So that we're building on a platform that lets us do the sort of mesh of things that people are excited about. Two containers with shared memifs have been working since October and November. I understand that; I wasn't saying that it didn't work. I was saying that it's not really a separable building block. 
I mean, we talk at the moment in terms of a data plane that connects a client to an endpoint, and setting aside the point about there not necessarily being things like clients and endpoints, there might not be a data plane involved here either. You might just want to connect two containers together for fast networking purposes on the same host, which you know are on the same host. So, and I know you know that I had some questions around this too, right? Like, because memif works great if I'm on the same host, and there are going to be instances where I will dictate to an application built out of microservices that, hey, I want to make sure that this is how you deploy. But I mean, this whole concept of us going truly cloud native, right? Like not recreating OpenStack's problems too. I would also like the alternative if I do have an extremely fast and powerful fabric sitting above this. If my network engineers have taken the time to actually put this whole spine-and-leaf thing above me, then I would also like the potential ability to intelligently create services where the containers have their own interfaces and I just use the physical network northbound to do all this, right? Without necessarily introducing a bottleneck with a network service endpoint. And I think Ian and I are kind of both wrapping our heads around this from like different angles but are trying to reach the same goal. Sorry, I didn't mean to talk over you. So here's the thing I would sort of say there, Jeff. Jeffrey. Mm-hmm. Definitely the ability to treat an SR-IOV VF as the sort of local mechanism that's handed to the container. Definitely that's something that needs to be done.
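The path-selection trade-off being described, memif when co-located, the physical fabric when the operator has one, a software data plane otherwise, can be sketched as a toy function. The names and the fallback order are illustrative assumptions, not how NSM's real mechanism negotiation works:

```python
def pick_connection(client_node: str, endpoint_node: str,
                    fabric_available: bool = False) -> str:
    """Toy path selection: prefer a direct memif when both workloads share a
    host; otherwise lean on the physical spine-and-leaf fabric if the
    operator has one; fall back to an overlay through a software data plane."""
    if client_node == endpoint_node:
        return "memif"            # shared-memory interface, no data-plane hop
    if fabric_available:
        return "physical-fabric"  # let the northbound physical network do it
    return "vxlan-overlay"        # software data plane as a last resort
```

The middle branch is the "don't recreate OpenStack's problems" case: if a fast fabric already exists, don't force every packet through a network service endpoint bottleneck.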
Yeah, but how do we treat the switch as a network service of some variety? An element of the way that I have been conceptualizing, in my own mind, the physical network around this is that, and this is one way to conceptualize it, if you think of the ToR port, the top-of-rack switch port... let's put aside the SR-IOV for a second. Let's just talk about the physical NIC. We'll come back to the SR-IOV because I agree it's important, but the story is easier to tell initially without it. So if I just have a physical NIC that I've plugged into my container as a local mechanism, I would actually like it to be the case that it gets some kind of treatment from the physical network, right? And from my point of view, effectively what that means is that the ToR port that that NIC is plugged into is in fact providing some kind of a network service that you would like it to provide. And so that's how I've been sort of conceptualizing this so far. Give me just one second and let me dig up the sort of scribbles I've done so far and see if they help. Well, to take you from there, right? Then that says that a data plane is a network service, and a data plane consumes a network service as well, if it's using the ToR to actually get its connectivity host to host. And if that's the case, then I don't necessarily use a data plane to connect one network service to another. So my point is that our lowest level of building block comes before we start talking about meshes. You know, I use the equivalency here, and I don't want you to get distracted with the way that we're integrating with Envoy, but my point is that Envoy is a form of networking that builds on Kubernetes native networking. So there's a high level and a low level. Feels to me here like we've got the high level, but we haven't got the low level, where we've got things that, you know, don't need connections that reach across from host to host.
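The framing in this exchange, that the ToR port itself provides a network service, and that a data plane both provides a service and consumes one from the ToR, amounts to a chain of providers. A minimal sketch of that recursive shape, with every class and name invented for illustration:

```python
class NetworkService:
    """Minimal stand-in for 'anything that can be asked for connectivity'.
    A service may itself consume an upstream service, which is the point
    being made about data planes and ToR ports."""
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # the service this one consumes, if any

    def chain(self):
        """Walk the provider chain, e.g. container NIC -> data plane -> ToR."""
        link, names = self, []
        while link is not None:
            names.append(link.name)
            link = link.upstream
        return names

# The switch port is itself a service; the data plane consumes it; the
# workload's interface consumes the data plane.
tor_port = NetworkService("tor-port")
data_plane = NetworkService("vpp-data-plane", upstream=tor_port)
workload = NetworkService("client-nic", upstream=data_plane)
```

Under this model, "I don't necessarily use a data plane" just means the workload's upstream can point straight at the ToR port, skipping the middle link.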
We just need touch points, if you like, very local things that are on-host or host-to-network. But it also feels like, if we look at what's inside a network service, then we've clearly got a control plane element and a data plane element. The control plane element has a certain API that we like and we respect and is, you know, defined by NSM. But the data plane element could be, you know, VPP in a container, or it could be the kernel being controlled by a container, but it could equally be the physical switch or the physical router. No, no, I'm completely fine with all that. So I've been trying to break off some time to draw, to sort of lay out some of the pretty picture pieces. I've got some scribbles that I could talk to if that would be of assistance at this point, if folks are interested. Is that something you guys would be interested in at this moment? Yes, I have similar questions and I'd be interested in that. No, no, it's a perfectly valid set of questions. Yep. You know, and sort of part of the reason I haven't written them down yet is I first need to write down the inter-domain stuff. Let me ask you a different question before you start down this. Could I run NSM with no data plane at all? No, not currently. Why not? Because something has to be able to handle making sure that you're actually connected to the network service that you want to be connected to. And it's local. Something has to be able to do that for you. But if it's local to me, if it's local to my host or, you know, again the physical switch, then I don't need a data plane for that. So I think perhaps the semantics of the word data plane are getting in the way here. So I'm looking at your definition as you've written it, because again, your code defines it at the moment. Hello, I'm sorry to interrupt. I'm having a lot of trouble hearing, but I think this is a conversation that's really relevant to me.
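The split being described, one fixed control-plane API over interchangeable data-plane elements, is the classic pluggable-backend shape. A hedged sketch of that shape (these classes are not NSM code; the method and its string outputs are made up):

```python
from abc import ABC, abstractmethod

class DataPlaneElement(ABC):
    """The data-plane side of a network service. The control-plane API on
    top stays fixed; what actually implements the forwarding can vary."""
    @abstractmethod
    def connect(self, src: str, dst: str) -> str: ...

class VppContainer(DataPlaneElement):
    def connect(self, src, dst):
        return f"vpp: memif {src} <-> {dst}"

class KernelDataPlane(DataPlaneElement):
    def connect(self, src, dst):
        return f"kernel: veth {src} <-> {dst}"

class PhysicalSwitch(DataPlaneElement):
    """The point of the discussion: a physical switch or router can sit
    behind the same abstraction as a software data plane."""
    def connect(self, src, dst):
        return f"switch: vlan between {src} and {dst}"
```

Under this reading, "running NSM with no data plane" would just mean the element behind the interface is the physical switch the hosts are already wired to, rather than nothing at all.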
So please yell at me, or yell very loud at me, if I am stepping on or repeating something that was already said. I think some of this might be clearer if we implement bare metal services and infrastructure, for example on packet.net, and put VPP on the data plane, and then use memif to connect to the pod, and then the network service mesh data plane abstraction would, quote, "know," unquote, how to tell the real VPP data planes, for example, how to set up the kinds of connections we're looking for. And I think most of that is probably there, where we are. But to me, that's connected in some way to the SR-IOV discussion, the MPLS discussion that Ed and I have had, if we implement our real network, for example in the CI, that way. So I mean, if you have SR-IOV, some of this might fall out of it. And I'm having trouble hearing you, Ed. So I'll speak extremely loud, not because I'm in any way upset, but to try and help Tom here. We have a challenge here in that one of the following things has to be true. We either have to use a mechanism for talking to the NIC that's not DPDK, or DPDK has to change to be more tolerant of not getting the NUMA zone that it wants, or Kubernetes has to allow you to actually manage the NUMA zone correctly, which it doesn't currently, and I don't know when that's landing. Or we have to use some alternative mechanism; for example, we do have native AVF drivers that can talk to SR-IOV NICs without the DPDK problems, and we could potentially use those. But there's one remaining option: we could turn off the second socket on the server. There's a fundamental mismatch where DPDK is very insistent that the world has to be precisely the way it wants it. And that's just not how cloud native works. Yeah, that's fine. So this is good, right? You're naming things that need to resolve themselves in order for us to make things useful, and that's completely acceptable.
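The DPDK complaint here is essentially a NUMA-alignment constraint: DPDK wants its cores, memory, and NIC on the socket it expects, and Kubernetes gives you no way to guarantee that today. A toy feasibility check that captures the shape of the problem, including the "turn off the second socket" workaround (all parameters are hypothetical):

```python
def dpdk_placement_ok(nic_numa_node, pod_cpu_numa_nodes,
                      single_socket_host=False):
    """Toy DPDK-style placement check. On a single-socket box everything is
    trivially aligned (the workaround mentioned in the meeting). Otherwise
    the pod's CPUs must all sit on the same NUMA node as the NIC, which is
    exactly the guarantee Kubernetes can't currently make for you."""
    if single_socket_host:
        return True  # only one NUMA zone exists, so nothing can mismatch
    return set(pod_cpu_numa_nodes) == {nic_numa_node}
```

The three escape hatches named in the discussion map onto this directly: a non-DPDK NIC driver (such as a native AVF driver) drops the check entirely, a more tolerant DPDK would accept a `False` result and degrade gracefully, and a NUMA-aware Kubernetes would schedule so the check passes.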
So where's that on our tracking? So I'm happy to put that on the tracking. None of those are actually problems that lie within our hands. One of them is... Well, with the pair of hands that we have, we have a lot to do. The DPDK community has to change its mentality from, we are... We can be a part of the DPDK community as well. Maybe I can come at it from a potential end user viewpoint. I agree a bit with Ian: NSM as a mesh would reside above whatever you decide to have. Like, I'm a fan of SRv6, but you could have all your hosts running SR-IOV ports, or you can actually hardwire the NIC directly to a container if you want to. The notion of being able to abstract how the service mesh is actually represented physically is kind of independent. You have your pick and choose, and you might go fast with DPDK, but it may end up that sometimes you're going to have something else. So those iterations of how to represent the data plane of an NSM will evolve over time. Being able to abstract that means understanding that there are, for example, locality services. If an NSC needs, for example, an SR-IOV port, darn, you have to have the logic inside the NSM to say, well, if my client needs an SR-IOV interface, please place it on the same host. And the plumbing underneath it needs to make sure... That part is easy. The part that's broken, and the reason that I haven't been driving the SR-IOV so hard, the part that's broken is that DPDK thinks, right now... Look, Ed, we understand that DPDK has got its problems and it's not going to work until we change it. That's totally fine. But my point is, let's stop avoiding this conversation because DPDK is broken. Let's have it, because we've got to work out how to fix DPDK for this. We can't run it in a container. Then we've got nothing, quite honestly. No, the problem is you can run it in a container, but you can't run it in a container in Kubernetes in a pod. This is the thing, to make it really clear: there's a huge gap in cloud-native networking.
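The "locality services" logic described above, if my client needs an SR-IOV interface, place the other side on the same host, is a scheduling constraint. A minimal sketch, assuming a made-up set of mechanism names and a made-up host list:

```python
# Mechanisms that only work when both sides sit on the same host:
# an SR-IOV VF is bound to one machine's NIC, and a memif is shared memory.
LOCAL_ONLY_MECHANISMS = {"sriov-vf", "memif"}

def candidate_hosts(mechanism, client_host, all_hosts):
    """Hosts where the other side of the connection may be placed.
    Local-only mechanisms pin placement to the client's own host; anything
    reachable over the fabric (e.g. SRv6, VXLAN) is unconstrained."""
    if mechanism in LOCAL_ONLY_MECHANISMS:
        return [client_host]   # the plumbing underneath must keep them co-resident
    return list(all_hosts)
```

This is the "that part is easy" half of the exchange; the hard half, per the rest of the discussion, is that DPDK's NUMA demands can make even the co-resident placement fail.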
This is an excellent example of that. Now, I have people who work on DPDK to try and get this fixed, and I'm also tracking efforts in Kubernetes to have a cloud-native way of doing NUMA zones. I would be really happy to have more people involved in those discussions. But until those things get fixed, effectively the only way to run anything that uses DPDK in any capacity inside a pod is to only have a single-socket server. Yeah, fine. But when those things get fixed, then we come back to that question: we are building today a thing which will connect a network service endpoint to a network service client regardless of where they lie in the system. But I think we're building it from smaller building blocks, which is things that actually neighbor each other and touch each other, whether that's a switch to a container via an SR-IOV interface or a container to a container via memif. So what I was trying to say is that I think we've got a lower level of abstraction that we haven't really looked at yet. And yes, maybe it's because DPDK is not ready for us, but I think we need to be making our preparations. No, so I think basically what you're saying is you need to know what the story is for how this is going to work, so we can define it at the architectural level, discover whether we are likely to have unforeseen issues, and, you know, try to get them resolved. And I think that's actually a super good idea and a super good point. The reason it has received less of my personal attention at this moment is, you know, if I were to go off and write some code to try and do this, I literally can't realistically test it until DPDK realizes that it's not going to have dictatorial control over the whole box. And so before we continue on, we're already a few minutes over. So we should punt the... Oh yeah, I do want to say...
I think the key here is, Jeffrey, you're going to get a Doodle poll going to have some architectural discussions. I was going to say two things. One, I have a direct line to the Intel development team, and they're always asking me what open source things they can work on for me. So let's leverage the fact that I work for a giant SP and get people to work for us. And then second, I don't want to take a single bite of this and be told I can't have the rest of the meal. So can we save these slides and all that for these architectural meetings, since we're already over the hour? Yeah, I think that's probably the right place to do it. Absolutely. All right, so Ian, just get offline with me and I will look at flexing the charter muscles and seeing if we can get some Intel people to solve some of our challenges for us. All right, cool. Go ahead and include me in that as well. And thank you. Closing it up at the same time next week. See you all there. Later guys. Take care.