Greg, I want to verify, am I coming across? Can you hear me? Oh, yes, you're very clear. Fantastic. Okay, also, I tried sharing my desktop using Zoom, and Zoom doesn't seem to like the latest versions of Fedora for sharing, so if someone can share the agenda, that'd be fantastic. Sure, I'll do that. Great, thanks. Can you see my screen? Yes. Okay, we'll give it a few more moments and then we'll start. Okay, well, let's go ahead and get started. As always, let's start with some agenda bashing. Is there anything anyone would like to discuss in this meeting that is not on the agenda? Please speak up and we'll get it onto the agenda. Well, I guess I'll throw something out there. I see John on the call. There was one item that came out of my review of John's PR 247, so I'll add that to the agenda. John, this was the item about in-tree versus out-of-tree plugins, and, if they're in-tree, the directory structure you proposed. I thought we should discuss that as a broader team. Yeah, feedback would be good. What I proposed was just a hack; it was not well thought out. So more just that we want... No, no, it's definitely good to bring it up, and I think it's something we can evolve. I don't anticipate implementing something in the full way one way or another, but it's definitely something we should start to discuss as a broader team. Things kind of went quiet from my perspective. Oh, sorry about that, I somehow ended up on mute. Okay, so just a reminder: add yourself to the attendees list if you haven't done so already. And upcoming events: we have the cloud-native network function seminar at the Open Source Summit on Tuesday, August 28th. So make sure that if you are attending and you haven't registered already, when you do register, you click the checkbox for the seminar so you can get in. Let's see, I don't have any special announcements.
Does anyone have any special announcements they want to talk about? So, we are starting to use Dependabot to automatically push pull requests to update our Go dependencies. Does someone want to talk about that? Yeah, that was me, and I'm definitely happy to discuss. One thing that I've noticed over the last couple of months is that our dependencies end up being out of date fairly frequently. I noticed this when I wanted to go in and update a specific dependency: I'd run dep ensure and it would pull in lots of other updates on top of that. So I did a little bit of looking into this and found Dependabot, which does essentially exactly that: it will simply go through and push PRs. The code for the bot is all open source. The people who wrote it actually have a company around it, called Dependabot, but for open source projects it's free to use, and you can hook it up. I liked it because the code is actually available, so if we ever wanted to do this ourselves, we could. And if you look at the two updates it pushed that I merged, they're actually incredibly detailed updates that include all the changes from the new dependency, including a bunch of links and everything like that as well. So it seemed super useful. I also liked that it just pushes the PRs, and we get a chance to review them and decide if we want them or not. Thoughts from anyone else? Seems good. Yeah, I like the idea, especially when you consider... so I assume it's using our dep configuration, and it's not updating outside of that, right? Correct, it's using our dep configuration exactly. Okay, fantastic. And one good trend that we're seeing in the Go community is that there is a push toward semantic versioning.
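For readers following along: Dependabot's configuration format has changed over the years, so this is only a hedged sketch of what wiring it up looks like, using field names from the later v2-style `.github/dependabot.yml` format; the dep-era setup discussed on this call was done through Dependabot's own UI and looked different.

```yaml
# .github/dependabot.yml -- a sketch, field names per the v2-style format
version: 2
updates:
  - package-ecosystem: "gomod"   # which dependency files to watch
    directory: "/"               # where the manifest lives in the repo
    schedule:
      interval: "weekly"         # how often to check for new versions
    open-pull-requests-limit: 5  # cap on concurrent update PRs
```

With something like this in place, the bot opens one PR per outdated dependency, and the team reviews and merges (or closes) each one, which matches the review-first workflow described above.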
And the more the community adopts semantic versioning, the more useful this tool becomes. I agree completely. So let me take one second here and I will post a link to some of these. Actually, hold on, one way I can do this that's a little bit easier is this, just a sec. So here's an example, and I'll put it in the chat so people can take a peek, of what the bot proposes and what the pull requests look like. Thank you, Lucina, for sharing that. Yeah, if you click on one of those, you can see that the pull requests are quite detailed and include release notes, individual commits. Super, super useful. I was pretty pleased with this, actually. I'm curious. I mean, you know that with Go 1.11 there's a new model for dependencies coming in. So with this, will Dependabot kind of become outdated or not used anymore? Are we sticking with Go 1.10, or will we move to 1.11 at some point? So correct me if I'm wrong, but Go 1.11 doesn't become incompatible with dep, right? Yeah, exactly. The new feature is disabled by default, but you can enable it. Okay. Also, I'm sorry, go on. No, please go ahead. Yeah. So one question that comes to mind: I took a quick view of the Go dependency stuff, and perhaps I got this wrong, but does it perform this action for all dependencies? Or does it only perform the modularization specifically for the Go project itself, updating the Go packages that aren't part of the core SDK but are still provided by the Go team as extensions? Are you asking if Dependabot performs those updates? I just want to be... Not Dependabot, the new Go 1.11 vgo model. Ah, cool. Okay. Sergey probably knows that, then. Yeah.
Well, I mean, I haven't tested it, but my understanding is that it takes a dependency file, no matter which packages you have, and converts it into the modular form. So it's not just the Go components; all the packages will be treated the modular way. But I haven't tested it, so I'm still waiting for the official 1.11 release; I don't want to play with the beta. Yeah, definitely agree. That's a good question. We can ask the people who built Dependabot as well, to see how it interacts with that new system and whether they have any plans for integration. So I have another question: how does this new thing we're talking about interact with the dep dependencies? Does it make use of dep? It becomes obsolete. Dep goes away with the new approach completely. I mean, they have a transition: when the new vgo modular way gets activated, it can detect that you have a dependency file, it converts it into the module list, and from that point on you use that in future. Okay, interesting. It strikes me that we may want to play with this a little more before we actually make a call. I think it's definitely a good thing to play with. And then we also need to think a little about how bleeding-edge we want to be in terms of the requirement on the Go compiler; not everyone leaps forward to the newest version the first week it comes out. So I think this is fascinating and really interesting, and we should definitely look at it, but I don't think we're going to make a call today, and probably not immediately when 1.11 comes out; we'll have to get used to it a bit first. Does that make sense? Yeah, I definitely agree with that. And also, my understanding was that in 1.11 this is not enabled by default. So we have time to try it out.
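The dep-to-modules transition being described can be sketched roughly as follows (assuming the module support that shipped with Go 1.11; the module path shown is just a placeholder, and module mode was opt-in at that release):

```shell
# In a repo that currently uses dep (Gopkg.toml / Gopkg.lock):
export GO111MODULE=on    # module mode is opt-in in Go 1.11

# go mod init detects the existing dep files and imports their
# pinned dependencies into a new go.mod file.
go mod init example.com/our/project

# From this point on, builds resolve dependencies from go.mod,
# and the dep files are no longer consulted.
go build ./...
```

This matches the "it detects that you have a dependency file and converts it into the module list" behavior mentioned above, but, as said on the call, it's worth verifying against the final 1.11 release rather than the beta.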
We could even still move to 1.11 without bringing in this new paradigm. So, I have a dumb question here, and maybe the answer is right in front of me, looking at pull request 246: is it clear how to use this for those of us who are submitting, or planning to submit, a PR? Or is it transparent? So it should be. If you submit a PR that adds new dependencies, your PR will include those, and then what Dependabot will do is detect those new ones the next time it runs, and from then on check for updates to those dependencies going forward. So you won't have to deal with the updates. Does that make sense? Oh, absolutely. And of course that will happen only once the PR is merged. Correct. Yep. It should make it pretty seamless: once you merge code that adds a new dependency, from that point forward Dependabot will take over making sure that it stays up to date, and we get security fixes for that dependency and so forth. No weird side effects. That's good. Yep. The only weird side effects are those we choose to merge ourselves. Well, we have a pretty big topic that we want to jump on top of, so we'll go ahead and give Dependabot a shot and see how it works. And with that, I want to move to the draft of the email proposing the Network Service Mesh working group. So, Ed, you have the floor. Yeah. Cool. Oh, excellent. Thank you so much, Lucy, much appreciated. So, basically, we're sort of ramping up; the suggestion we got from SIG Network was to seek to become a Kubernetes working group, and apparently there is an email that you send to kick that process off.
And so I've been trying to work together with various folks to draft that email, and I wanted to float it up to the broader community so that folks could comment and we could take a look at it. We can take a quick look at it here, but it's also commentable via the link, so please go and add comments. We would like to send it out sometime early next week, so if folks could comment today or over the weekend, that would be super, super helpful. Do you want to go ahead and scroll down? We'll walk through the email a little bit. And by the way, much of the prose here came from the document that Frederick wrote. So you get the executive summary: Network Service Mesh is a novel approach to solving complicated L2 and L3 use cases in Kubernetes that are tricky to address with the existing networking model. Inspired by Istio, we map those concepts to L2 and L3 payloads. And this is a request for a Kubernetes working group for Network Service Mesh. Then we talk about the problem. Right. Yeah. We go through and talk about the issues with telcos, SPs, 5G, et cetera. Then we point out that these are people who have advanced L2/L3 use cases, and what we have right now doesn't really work for them; that's the second paragraph under the problem statement. And then we talk a little bit about some of the current-generation assumptions about things that are going on, and that these assumptions in the Kubernetes networking implementation work beautifully for existing app developers and should not be changed in a way that makes them less useful for app developers. Quick comment about the telco emphasis there: I would also include some enterprise use cases as well, because a lot of... Could you drop a comment pointing out those? Sure. That would be really super helpful to me.
I'm aware of those use cases in the generic sense, but I think you're a lot closer to those use cases than I am. Okay. But I think that actually helps a great deal, making it less telco-focused, because, especially for Kubernetes, there is a much bigger market in enterprise than there is in telco. So I think that's really, really good. What we want to stress is that people like Google and Amazon and Azure are basically going to lock down their CNI infrastructure; it's their CNIs when you use their infrastructure, so you can't really extend networking. With NSM, you can extend networking. I'm not quite sure how to say that, or even if we want to go into that rat hole. If you could hum a few bars in a comment as to what you might suggest we put there, and where. I'm not even sure we'll want to put it there, but I just think it's a bit of a, you know. Right, I mean, this is an email to request a working group, right? So if it's too long, I'm afraid not too many people will read it anyway; the first couple of lines, maybe, that's it. Yeah, that's absolutely true, and Kyle had made that point earlier. I'm very open to finding ways to shorten it, and that is partially why I put the executive summary at the front and why we have references at the end.
So one thought I had, and I know I mentioned this, Ed, when I reviewed this last week, but I'll just mention it in the meeting: one thing we could do is link to what Frederick wrote for the detailed portion of this and then just have a summary at the top, and maybe that maps to what Sergey is suggesting as well. Yeah, so let me sort of throw this out there. Let's go back up to the top of the executive summary. If we were to keep the executive summary and have the references at the end, what would we want to add to the executive summary, briefly, to make it relatively complete? Because there are two dynamics at play here. The first dynamic, which both Kyle and Sergey are right about, is that people don't read long emails. The second dynamic, which is also true, is that they often don't follow links from emails. And so we want to get the shortest possible thing into the executive summary that's sufficient, and then probably just drop to the references. Does that make sense? Yep. And then I think one of the things we do want to be clear about in the executive summary is that we're not seeking to change the existing networking that works so well for app developers. I think that's going to be an important point. I think one thing we need to be very clear about as well is what our goal with the working group is. Because if we come across as though our goal is to market Network Service Mesh, we're absolutely going to be rejected, and rightfully so. If our goal is about how we can find the issues in Kubernetes that we may run into in the various environments and turn them into actionable items, then we're much more likely to have this accepted. Yeah, I think that's actually a good point.
And I think one of the other things we need to sort out is where Network Service Mesh is going to get really fascinating for a lot of folks: as we mature and we settle out what the APIs are, for example, particularly the NSM-to-NSM communication. Because my guess is that most people will probably just deploy the network service manager that we provide to their Kubernetes cluster, although they're free to write their own. But when you talk about external network service managers and proxy network service managers, there are going to be lots of other people who will want to write those. And so I think being able to document the architecture and the APIs involved, as they mature, is also something that we want to call out as a goal for the working group. Yeah, Ed, this is Al. I'm struck with the idea that what we want to replicate is exactly the description that they have for the current working groups, except for Network Service Mesh. That way it becomes a very consumable thing in a shape they've already read for existing working groups. And I think that covers the scope comments and so forth: what's in scope, what's out of scope. That sounds like a good way to approach it. Okay, so we can definitely take a look at that. There's a link at the top to the recent IoT Edge email; this is the one that was just successful with the IoT Edge working group. And they were relatively unstructured, which is part of why they basically talked through: okay, we've got these things going on, some of the things we've identified are this, some of the people who are interested are that. And then they didn't really talk a whole lot about what they were going to do, or the scope, or what's out of scope. I think those are both excellent suggestions. And it sounds like you've seen more structured proposals that might be good to crib off of. Is that so? Yeah, but not in CNCF.
I mean, I started to search for that and then realized I'd better get back to the screen I'm supposed to be looking at. But I think that ultimately that's what people want in projects, you know: goals, colossal non-goals, what we're going to mess with and what we're not. I am so stealing the phrase "colossal non-goals." That's one I love. I won't write a document anymore without putting in a sentence about that. I guess the real advantage is, you know, being clear. And also, I think the background information should basically say: we've been meeting for two months, it's these 25 people, here's our repo, and so on, to really show that this is real. Yeah. One of the other things I noticed, actually, is that they called out various companies that are interested in the working group forming. And I'm curious if folks have opinions on being included in such a list for Network Service Mesh. We've got really broad participation going on in the community here, but I'm sensitive about not calling those things out without people being okay with it. I think you can list the affiliations that people put down when they sign in each week, but to somehow imply that there might be official policy from some of the companies we're affiliated with, maybe not. Would "there's been community participation from" work? Yes, exactly. That stops quite a bit short of endorsement, but it is factually accurate. Yeah. And it works better when you're working with the public company model, because public companies don't like for us to say we're endorsing this. Oh God, no. You know, I've worked with Cisco's PR and AR apparatus, and worse than that, I've had to work on cross-company PR and AR things in other communities, and no, that's hard. That's really hard.
Yeah, I agree with Fred: listing the affiliations just indicates the broad base we're building in the community. It doesn't really imply endorsement, because it's still new, and some of those companies may not be quite ready to build a strategy around this future and current project. Yeah, please keep it very soft or I'll get into a lot of trouble. So just to be really clear, the reason I wanted to bring this up is that I absolutely don't want to get anyone in any trouble. Right. The way I would phrase it is: there's been participation in the Network Service Mesh community from various companies. So simply keep it at participation. Or say individuals from various companies. I absolutely can say individuals from various companies. Yep, I'm totally down for that. My main point is that we have two competing interests, one of which is overwhelming. The overwhelming interest is not getting anyone in trouble. The second interest, which is also large but not overwhelming, is that we want to show that we have a broad base of support. So it sounds like I need to take another turn of the crank on this email to shorten it and make it more focused on goals, colossal non-goals, participation from individuals, that kind of stuff. Does anyone else want to help with the drafting process? I am more than delighted to add edit rights for people who want to help with the drafting process. Yes. Okay, cool. I think you added me as well. I said that I'm definitely happy to review and provide some feedback. Yeah. So the good news is that the link to this is publicly commentable, meaning anyone with the link can add a comment. The question asked was who else would like to help with drafting, because such people need edit rights. So it sounds like I need to add Tom to that list, and I'm delighted to do that. Anybody else?
I'm adding your names to the agenda as well. Cool. So speak up. I have Tom and I have me. Who else wanted access? I'm going to stick Kyle down. Yeah, go ahead and add me as well, please. Okay, and I'm adding Sergey. Anyone else who wants to be added? I'm happy with just comment rights for the moment; this is Al. Yep, it's all good. Okay, so let me go ahead and make sure I get all the various people added with edit rights. Yep. Cool. Add yourself to the list, or if you can't, speak up and we'll make sure we get to you. Okay, is there anything else we want to talk about on this particular subject, or have we completed it? Okay. So I think I've now added all of the appropriate people with edit rights. You're all listed with edit rights, right? So if you can't edit, now is the time to complain, but not necessarily here. Okay. And as a reminder, the proposal date is the 17th, which is one week from now. Okay, so is there anything else we want to discuss on the proposal, or are we good to move on? I'm good to move on. Okay, let's see. So, we have the X-factor CNF. The reason it's an X right now is that we don't know how many factors there will be. Now we have a few people who were not present last time, so we're going to move on to the next slide. And I just wanted to discuss this particular idea, especially with the broader group. One of the things that I am hoping we can do to help the community understand what a CNF is, and not just for Network Service Mesh but for any CNF, is to push toward something like the twelve-factor apps approach, where you follow a set of heuristics: some number of cloud-native function heuristics you can follow that help you develop, maintain, and operate cloud-native functions.
And so one of the things I've been doing is going through the twelve-factor app heuristics in detail, so that I can make sure I have a good understanding of twelve-factor apps. It's review for me, but I wanted to make sure I get the finer points. They actually have a website you can go to, for those who are interested: if you type "twelve-factor app" into your favorite search engine, the first hit should be the twelve-factor app website, 12factor.net. They have a list, and there's a little bit of a discoverability issue, but if you click on each main bullet point, it actually takes you to a full webpage that describes, when they talk about a particular topic, what they really mean by it. So suppose someone asks: what do they mean by "config," by "store config in the environment"? What do they really mean by that? At least it has the example up on 12factor.net. Or they say that they support concurrency, scale out via the process model; what do they mean by that? Now, the reason I'm suggesting we don't go with just the vanilla twelve-factor app is that the context is significantly different. That context is around building scalable web applications, and what we're looking at is how to build scalable cloud-native functions, and ultimately reach scalable cloud-native network services, and edge and IoT and so on. And it may even be that in the long run we have to create different flavors of these kinds of heuristics.
But yeah, what I would love to see is some work toward pushing on this, and if we can come in with even just some type of a draft for the ONS, sorry, not the ONS, the Open Source Summit CNF get-together that we're having on Tuesday, I think that would help push the community along. So, any thoughts on this? I like the idea quite a lot. I'm actually very tempted just to keep the label X-factor, because it's really cool, but I suppose we have to come up with a number. We can keep it as X-factor as well, because if we're talking about edge versus IoT, the number is going to be different; you're going to have different concerns. And so if we keep it as X-factor, if that's what we decide to do, then we have an excuse for doing so: maybe these are the 12 IoT factors and the 15 edge factors, and so on, and so it has to be X-factor. My only concern is that people might mistake it for 10-factor. Very limited use of Roman numerals in the current age. This is true; I'm not even sure they teach them in schools anymore. This is Watson. I liked the idea as well, the twelve-factor thing. One thing is that twelve-factor was kind of harvested out of a lot of pain from deploying and doing things, and it took the best practices out of that. Whereas CNFs being kind of new, it seems like we're having to draw from maybe OPNFV and other projects like ONAP for a base of best practices. So that's my only concern. But if we're doing it that way, harvesting from that pain, then I think there are a lot of lessons we can learn and apply. Yeah, that's an excellent point. And to be clear, the twelve-factor app was driven, I'll say, not created, but driven primarily by one of the co-founders of Heroku.
And so they certainly had the context to build such documents and do a good job with them. And it's clear that any one group in this space, at this particular point, likely doesn't have that context yet. So that's actually a really good point, but I think it's an even stronger argument for keeping the X-factor: where the twelve-factor app really settled out of a lot of accumulated experience, part of what we're talking about with the X-factor CNFs is trying to distill that experience as we discover it. It's more of an active process rather than a codification. I wonder... I'm thinking about this, about when you try to codify best practices, which is what this is an effort to do. In one of my other lives, I've tried to do some of that, and at a certain point it sometimes gets controversial. The issue with implementing pieces of the network is that sometimes you have to break rules that other people think are good practices. And the same goes for what used to be called embedded, now IoT. So we might want to be a little careful about setting down too many guidelines this early in the project; I'm wondering if we may be creating more problems than we solve. I don't know. I think let's just move toward the advantages of CNFs, but not really sound like we're dictating to people how to code their CNFs yet. Anyway, we're providing a platform to make it easier to build CNFs; I'm not sure yet we care what is inside them. Sometimes they may be internal to telcos and things like that, and not necessarily open source themselves. I don't know, it's just some random thoughts that popped into my head. You don't have to write them all down. I think they're excellent points.
And one of the reasons I was looking at this is that when I was looking at early adoption of Docker, and early adoption of Kubernetes and so on, these types of techniques were not very apparent up front. Like, where should you keep your configuration? We saw deployments range from baking it into your image, to creating a config server somewhere, to injecting files at particular locations, and so on. So we're definitely going to have a lot of different ways and a lot of experimentation. And what I'm looking for... there are a couple of parts to it. One of them is not to truly have a codex of rules; that's why I use the term heuristic. There are patterns that, if you follow them, may make your life easier, but you absolutely should break them if it makes sense to do so. And the second part is, now that I'm thinking in more detail about what such an organization would look like: I do think this would be a living document that we'd have to start really early. Even just starting to identify the types of benefits we're looking for, identifying what other communities do to solve some of these issues, and starting to pull those together. So, for example, when you start talking about scalability: even though CNFs may have different purposes and different inputs and outputs from your standard twelve-factor app, there are still only a limited number of ways to scale things out. You either scale vertically, or you scale horizontally, or you make your process that much more efficient so that you don't need to scale as much in the number of processes and threads that you have.
And so some of these patterns, I think, will apply regardless of whether they're CNFs or twelve-factor apps. But there are others that don't make any sense on the CNF side. For example, the twelve-factor app considers everything to be a resource and prefers port binding, and we don't do port binding. So some of the factors start to fall apart in that particular area, and coming up with the organizational structure of such a set of heuristics, and where we draw the line, is something we'll have to be very careful with, assuming we decide to proceed in this direction at all. And we should be very sensitive to the diversity of the community. Even though this is going to be a smaller community, it's going to be a much more diverse one than the kind you generally see in the standard twelve-factor app world: diversity in the sense of the technologies we use and the ways we communicate. Does that make sense? So yeah, I think even if we just start with "what are the benefits we're aiming for," that's a fantastic start, and we can begin linking some of those to "these are some of the paths we've identified as possible ways to do that," and ask other people to contribute. And then, in time, two years down the line, et cetera, we just continue to refine and say: these are the best practices that we found, and here are the problems that we found. And eventually we can come up with something that says: this is how we build; you're a newcomer, or you're a new employee at one of these companies, this is how you build. So, okay, let's check the agenda. Okay, we have about 15 minutes left, so I'm going to go over some of the other parts relatively fast. We have Ian Wells, who is looking at SR-IOV on packet.net. And it's quite unfortunate.
There have been issues with SR-IOV. He actually posted something on the IRC channel, so I will copy and paste that. Unfortunately, the standard builds and configurations on packet.net show that SR-IOV is there, but he cannot actually create any VFs out of it. So there's the link. Taylor, I seem to recall that Michael had a little bit of success getting SR-IOV working in the last couple of days. Do you remember? Yeah. And I had some requests in to Packet as well about this. There was a thread with Packet; I don't know if I added a link to it in this ticket, but they do have some servers with support. Well, really, I guess the question is what level of support. Go ahead. I was just saying, one of the things to watch out for, because it's confusing as all hell if you're used to handing NICs over to DPDK, is that apparently there is a relatively new kind of DPDK driver that will share the NIC with the normal kernel interface driver. So one of the things that confused Michael and me for days was that we would try to bind to the VF, and we would see a kernel interface for both the VF and the PF, and we were sure that something was wrong. I reached out to some friends at Mellanox and they said, no, here's a link to an explanation of the fact that this is a real thing. So yeah, it's tricky. Yeah, we were able to get the virtual interfaces to show up. There seem to be a lot of other settings involved besides what type of Packet server you're on. Something may or may not be enabled in GRUB, so you may need to set that. And then a lot of other items: make sure you're not on an AMD machine, and so on. You can get everything else right and it still may not work. So one option that we have as well, and we'd have to coordinate with the CNCF to do this, is if they were to allow us to set up a long-term box.
That is, a box that's running long-term, so that we can set its BIOS to support this. It sounds like packet.net is willing to set that for a dedicated box, but they're either not willing or not able, I can't tell the difference at this point, to set BIOS parameters when you spin up an on-demand system. So if the lowest-cost option supports SR-IOV, then one option we have is to provision a couple of those. I think they run at $50 a month, so it's not too bad, and we'd essentially reserve those for the use of our SR-IOV testing. Yeah. One of the things also is that apparently Intel NICs work much better for this; I don't know if it's that Intel NICs actually work better for this purpose or that more people are familiar with how to make them work. It's a little hard to tell. But I do know that we have people who know how to achieve success with SR-IOV on Intel NICs. And so if we were going to stand up dedicated boxes for some reason, I do know the right people to petition for a donation of NICs, if it comes to that. Nice. And that's something that the packet.net people have told me: they do have the capability to stand up hardware for us, and if we can get them NIC cards, they should be able to drop them in as well. Of course, we'd want to verify this more formally before we performed some form of transfer, but from initial discussions it sounds like we can get special hardware in. Yeah. The other thing I think we want to make sure we do as we're discovering various issues is to push back at the various points that need to take action, whether that's Mellanox or DPDK or VPP or wherever, to make sure that these Mellanox NICs are supported as well, because they are pretty common and pretty popular NICs. I know that VPP does really well with the ConnectX-5, but I have very sketchy reports.
Actually, I have no reports of this stuff working here in Packet with the ConnectX-4 or ConnectX-3, because the people who are really, really interested are mostly interested in the ConnectX-5. Yeah. One thing that would be good to know as well, and I don't know if we have time to get into it right now because we have about nine minutes left, is to understand why they prefer the 5 over the 3 and the 4. That will help not only with our understanding in terms of Network Service Mesh and its implementation, but it's also data we can take to groups like packet.net to explain why the ConnectX-5 is so important. Yeah. So I think it would be fantastic to have this working before we go over to the Open Source Summit, so that we can discuss it and maybe even demo it to people who ask about it on the fly. And I'll see if I can get some details from Ian about whether or not the low-cost option, with the Intel or ConnectX NICs, would be good enough to just demonstrate the SR-IOV side. If we can use the ConnectX-4, probably the c2.medium is the best choice. The main issue that we saw was the network setup, if you're going to have more than one node. It looked like in the BIOS of the c2.medium and the m2.xlarge, the ones with the ConnectX-4, it was already enabled. The network connection, though, if you spin up two different nodes and you want them to talk to each other, is not set up in a way that's going to be highly performant. They use layer 3 by default in the way that the nodes are connected. I've chatted a little bit with some of the folks at Packet, and apparently there is some knob somewhere that you can twist, because every server has two NICs coming in. By default, they bond them and do all their networking at L3. There are things that you can do to un-bond them.
There are knobs for that, so you can have one of them be your normal L3 interface and the other one be an interface into an L2 domain of some kind. I don't know where those knobs are, but I have been promised that they do exist. That sounds like a lot of NIC-specific mumbo-jumbo. I think it would be nice to be able to do it with Intel; I hate to say Intel NICs, or jump straight to MLX5, which I guess is a little easier to... Yeah, but please note, with the problems we're having with Mellanox NICs, I am absolutely not convinced yet that it isn't a problem of familiarity at this stage, right? With all of these SR-IOV things, there are magical incantations you do to make them work. Yes, that's the problem. Yeah, and it just so happens that lots and lots of people are very familiar with those incantations for the Intel NICs, and I have a lot of trouble finding people who are familiar with them for the Mellanox NICs, or at least the MLX3 and MLX4. I do have people who are familiar with the MLX5. So I want to make sure we're really clear that it could just be that we haven't figured out what all the magic incantations are, and maybe you have to sacrifice a sheep instead of a goat when you want to get Mellanox NICs to work. I don't know. So with your contacts at Mellanox, do you think... because they have to have some form of CI testing around this, or the hardware equivalent of it. Do you think they could potentially give us a little bit of someone's time to work out whether we have our magic incantations correct? I can reach out again. I've reached out to them previously through a couple of different channels, and I can reach out again and say, look, we now have two communities here that are going to be crucial to you, Network Service Mesh and the CNF comparison, who are both stuck, and it behooves us to get unstuck.
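[Editor's note] The "incantations" being discussed can be sketched roughly as follows. The interface name and PCI address below are hypothetical, and exact steps vary by NIC and distribution: Intel NICs typically need the IOMMU enabled on the kernel command line, VFs created through sysfs, and the VF bound to a DPDK-capable driver, while Mellanox ConnectX NICs use a bifurcated driver, so the kernel netdevs for the PF and VFs stay visible even when DPDK is using the NIC, which is expected rather than a misconfiguration:

```shell
# Hypothetical names; adjust to your hardware.
PF=eth0                 # SR-IOV-capable physical function
VF_PCI=0000:03:02.0     # PCI address of a created VF

# The kernel command line typically needs "intel_iommu=on iommu=pt"
# (set via GRUB); inspect what is currently active.
cat /proc/cmdline

# If the PF exposes SR-IOV, report its capacity and create VFs via sysfs.
if [ -r "/sys/class/net/$PF/device/sriov_totalvfs" ]; then
  cat "/sys/class/net/$PF/device/sriov_totalvfs"
  echo 4 > "/sys/class/net/$PF/device/sriov_numvfs"
else
  echo "no SR-IOV capability exposed for $PF"
fi

# Intel NICs: bind the VF to vfio-pci for DPDK. Mellanox ConnectX NICs
# skip this step; their bifurcated driver shares the NIC with the kernel.
if command -v dpdk-devbind.py >/dev/null 2>&1; then
  dpdk-devbind.py --bind=vfio-pci "$VF_PCI"
else
  echo "dpdk-devbind.py not installed; skipping bind of $VF_PCI"
fi
```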
If we want to hurry them up, we should do it on Intel and say, look, it works with Intel; it doesn't work with yours yet. That would be fun and relatively easy to do, except for the fact that getting Intel NICs is a little challenging inside Packet. Ah. Well, because my contacts at Intel are much stronger than my contacts at Mellanox. Yeah, that makes sense. Jacob at Packet has been very helpful and responsive on all the issues we've been working on for the CNF comparison project, as well as Cross-Cloud CI. Ed is also very involved on the Cross-Cloud CI stuff. So I'll reach out to them and maybe ask about the ConnectX-5; I know they're doing updates there. And if there are other questions, I'm happy to talk with them about it. I do apologize, I have a hard stop and have to drop, but always a pleasure. Can you stick around 30 seconds for one more announcement? Sure. We have Docker images on the hub, thanks to Kyle for working so hard on this. So if you've been waiting, for some reason, to deploy daemon sets or do other types of integration, the group, correct me if I'm wrong, Kyle, is networkservicemesh, all one word, on Docker Hub, and you should be able to pull images from there. Correct. And we will be publishing those every time we merge code into master for now, because of the pace of development. In time, once we stabilize, we'll work out how to get stable versions onto Docker Hub so that you can pin a specific version rather than pulling the latest bleeding edge. But for now, it's going to be master; the head of master goes to the Docker Hub images for the moment. So just be aware of that. I'm not expecting it to cause major issues at this point, but it is a moving target. With that, is there anything else anyone wants to discuss before we complete the call? Okay. And again, thanks, Taylor; reaching out and asking them on your side would be fantastic.
And just so you know, we have spoken with both Ed and Jacob a little bit, but I think it would be good to ask again, because it's been a little while since we've asked. And so with that, thank you everyone for attending. As always, we're available on IRC in the network service mesh channel, and you can also send us an email on the network service mesh group. We'll see you all next week. Thank you. Thanks. Bye. Thank you. Bye-bye.