Cool. Great, so I think we have enough people on that we can go ahead and get started. So first, welcome everyone to the Network Service Mesh meeting. It's been a while since all of us have been able to hop on, since we've had LNS going on. So before we get started, let's do some agenda bashing. So if there's anything that you would like added to the agenda, please go ahead and speak up now. Michael and I would like to make a standing item for the performance testing part. I don't know whether it was there last week. It is not currently. So can we add that as a standing point for every week, please? Sure. It's VNF/CNF, actually let me type it. It's VNF/CNF testing and benchmarking, Michael and Maciek. So I don't know where you want me to put it in. Yeah, how much time do you need? This is gonna be just a readout, no arguments, because Michael and me are now working on it. So it's gonna be a readout; five minutes is fine. Cool, let's put it first and then we'll continue on from there. Okay, so before events or after events? After events. Okay, hold on, I'll screw it up. No, thank you. Cool. Anyone else that wants anything added to the agenda? Oh, and Michael is actually on holiday today. So it will be me only. Cool. But I'll provide the readout on where we are, thank you. In that case, let's get started with the events. So our next big event is KubeCon Seattle. We have two talks at KubeCon. Something I've mentioned under events as well is we also have the mini summit that's co-hosted with KubeCon, and so we expect to see some Network Service Mesh sessions at the mini summit as well. The main item that we need to take care of there is gonna be a Network Service Mesh demo, and that's on the agenda already. Someone also added a podcast and a blog. So anyone who is willing to step up to help out with those would be fantastic. And are there any other events anyone's aware of that we should add in? Not at this time. Cool. Also, if someone can share the screen, that'd be most helpful. Okay, the VNF/CNF testing and benchmarking. I'll let you go ahead and take it from here. Maciek, you and Michael are up. Actually, apologies. Can we move my point a bit later? I have a parallel call to pay a bill that I can't move. So can we move my point to later in the agenda, please? Sure, we'll move it down. Okay, KubeCon, let's go straight into the KubeCon demo then. So our first goal is we wanna have a basic local and remote cross-connect by November 7th, which lines up with the VNF/CNF comparison needs, and a hardware cross-connect by December 7th. Nice-to-haves include streaming topology visualization, auto-healing, and a point-and-click "run the demo yourself", which conveniently would be a static website, preferably over Hugo; it doesn't have to be Hugo, but that would be the most helpful. And so Ed, do you wanna talk about your ideas on the KubeCon demo? I believe you have more context on this. Yeah, so first of all, I just wanna make sure, for the basic local and remote cross-connect stuff, that would include local memif-to-memif cross-connects and remote cross-connects over VXLAN, minimally. And I wanna make sure that's actually, sounds like it's lining up with what you guys need for the VNF/CNF comparison stuff. Because I think we'd very much like to make sure that it lands in your hands in a timely manner. Sorry guys, I'm back. Do we have the VNF/CNF folks here? I think we do.
Hi Ed, yes, I don't believe Taylor's on the call. We have Watson on the call as well. And I think that does line up; the remote cross-connect is something that we were looking at. Okay. I think our deadline for the initial benchmarking was going to be November 5th. So there might be overlapping timelines there. Okay. So probably worth noting; do you mind just noting the timeline there for what you're expecting, so we can sort of try and steer those two together as much as possible? Sorry, is this about the KubeCon demo that I started to assist the team with, together with Michael, or is it something else? Ah, so there are two sets of things going on here. One is the VNF/CNF demo work, which you're assisting with, which is awesome. And then there is time to build a demo for Network Service Mesh for KubeCon. There's a strong ambition to bring those two together so the VNF/CNF demo can run over Network Service Mesh. And so in order to try and put some structure around that, we're trying to figure out what would have to be delivered by when in order to make that work out. So we'll have our... Sorry, so we have demo one and demo two, yes? Right, right. And we're trying to get those two to come together. So can we call it like that, so that we avoid confusion? Because right now it doesn't look like we have two different ones. So can we have a demo one and a demo two, both titled, and then deliverables for them. And then you say that ideally, if they merge, that's great, so that we have a demo one plus demo two. So I'm fine with that. I would suggest we call them the VNF/CNF demo and the Network Service Mesh demo so there's no confusion. Okay, fine. So let's do that, because that's not what they're called right now. The one thing I do want to be very respectful of is the VNF/CNF comparison folks; while they're certainly very valued members of the Network Service Mesh community as well, we don't steer what they're doing out of this meeting, right? And so it's less of us managing a unified schedule and more of communicating between communities to try and get schedules that mesh. Does that roughly match your understanding of things, Watson, Lucina, et cetera? That seems about right. So we have a KubeCon demo for Network Service Mesh and we have a KubeCon demo for VNF and CNF comparison. So these are two demos. Can we call them exactly like that? Because the current text is confusing. Right, so I think this is being called the KubeCon demo for Network Service Mesh and... Okay, all right. I'm gonna type the part that I'm playing in. So, KubeCon demo for VNF/CNF comparison. So the semantics are the same. Sorry, Lucina, I went ahead. Sounds good. Part of the reason I wanted to talk here about goals and dates is twofold. One is I wanna make sure that we have mutual understanding, in the hopes that these two demos can come together. The other one is, obviously this is a community. This is something we figure out as a group, in terms of who's willing to work on what, what people find interesting and so forth. So I took a swag here at the things that were interesting to me and the timelines that would be interesting to me. I wanna make sure we get input from other folks on the call about what they think is important and what timelines are important to them. Understood.
So I know we've got a bunch of folks who've recently popped up in the community, people who are looking for things that they would wanna be able to share or show by KubeCon or in other contexts, or, you know, who are looking for things to work on, who are curious about some of these things. I think that is useful. "Murkowski votes no on procedural vote for Kavanaugh." I think we've got somebody who should probably mute themselves who's watching a procedural vote, but I don't know who yet. Okay. Cool. So anyway, feel free to speak up, you know, if you wanna reach out offline, if you wanna think about it and add something to the agenda for this next week, that's all good too. And then there were a couple of things I'd sort of marked as nice to have. And the reason I sort of wanted a list of those is there are certain things that we could do that would be awesome, but that are kind of orthogonal, you know, in that they could be worked on sort of while other things are going on without disturbing them. And some of those are things like streaming topology visualization, which would be kind of a cool thing to be able to show as part of a demo, so you can see the links arise and pass away. There have been a lot of conversations people have had around auto-healing, which can probably be best summarized as: what do you do as a network service mesh when one end of a connection goes away, you know, the network service endpoint goes away, to restore service for the thing consuming that network service endpoint. And there's a lot of interesting thoughts around there. And then the point-and-click run-the-demo-yourself is always sort of the realization that, well, if you look at the pieces we have here, we've got the cross-cloud CI stuff that can basically go and start the demo up for you. We've got Packet, which has APIs that are accessible as REST APIs. You can poke those things with JavaScript. So in principle, you could probably put together a static website page with some JavaScript with a big button on it that basically kicks off the demo and shows you something pretty as the demo runs on Packet. This is obviously a stretch goal, but it's the sort of thing that people could work on, if they were interested in it, in parallel, without in any way shape or form impacting some of the other things that we're trying to get done. So a lot of the nice-to-haves are sort of a shopping list for folks interested in getting their hands dirty. Ed, this is Ramki. Sorry, I just remembered there was one item from before: we were thinking of talking in depth about the policy, the Kubernetes policy interface, and how network service mesh interacts with it. So maybe, if you have time, we can talk about it today or on an upcoming call. If we could add that to the agenda, that's awesome. It may be a little bit of a longer conversation, so we may want to put it a little further down so we don't crowd out other items. But I definitely, I know you brought that up last time. Thank you for raising it again. It's a totally worthwhile conversation to have. Thanks. All right. So anyway, I think we've beaten that drum enough. So Frederick, take us on to the next item. Sure. So we now have data plane API work that is currently being done. So there is work being done, I believe, by both Ed and Sergey. You both have been working closely with each other to help produce a data plane API. So I'll let you both continue on with that. You wanna go first, Sergey? Yeah, I mean, yeah, I could briefly talk about my part.
So basically, if you remember, we had an action item of refactoring the API. And at that time, it seemed to be a good point to look at the data plane API as well, because, I mean, they're kind of related. And so Ed and I, we started talking, and basically, to be able to complete the API refactoring, I suggested a couple of points, which I think Ed picked up and built upon with a more complete structure. So right now, basically, there's a simple data plane controller, which is currently merged in the networkservicemesh repo, and it can be used as a reference model. It pretty much does everything that any data plane controller would need to do from the control plane perspective; I mean, exchanging the liveness messages with the NSM and all the nice things. The only thing missing is the final piece, which Ed hopefully adds soon. And then we'll be able to complete the data plane. Yeah, I've got one patch that's in progress that I'm expecting to land this weekend, that sort of finishes the immediate set of refactoring steps for the data plane API. And then I think at that point, it will be in good shape. Obviously, people will continue to discover things that can be enhanced about it. But I think that the basic structure will be quite reasonable. I've got a question. Do folks have an interest in maybe doing a review of that API here next week? So we can sort of talk through it structurally and get community input. What is the exact scope of this API? Sorry, I haven't been paying attention to the data plane API. Yeah, so this is simply the API that the network service manager uses to talk to whatever data plane is handling the cross-connects. So this is basically programming the data plane functionality. Exactly. So if I am a network service manager running on a node, and I need a cross-connect between some pod that wants a kernel interface or a memif and some tunnel that I've negotiated with a network service manager on another node, I need a cross-connect that is composed of memif to VXLAN, and obviously bidirectional. And so I need a way to communicate that to the data plane. And of course, you get other niceties, like: what mechanisms can the data plane itself support? Those kinds of things. So there's some details in there. But the basic idea is: how does the network service manager ask the data plane to create cross-connects? Okay. And is it following some data model, if I can ask? Yeah, no, absolutely. And that's what's actually been landing in the patches that Sergey and I have been working on, that have been landing the last week or two up in the repo. And then there's a document that got a little bit started and will probably get cleaned up to match what's in the code, talking about the data plane API in the context of the other APIs. Okay, I've got two more questions. So, are you actually using the data models that have been defined for this functionality elsewhere? Specifically, I know about two sources. One is the IETF NETMOD working group, which is doing YANG, but it can be translated to JSON, Protobuf, you name it, because YANG is the strictest and the other ones are less strict. But it would be nice to follow those, so that it's easier to do machine-based translation in the future, and also to leverage them and make sure we don't miss any functionality.
And then there's a number of YANG models defined in the IETF, but there is also another thing called OpenConfig, or some open YANG thing, which is also, I think, an open source community-based effort to define network-centric data models. And I'm sure L2 cross-connect is there, so is L2 bridge, IPv4 and IPv6. We actually aren't, but there's a fairly good reason. So here's the thing. What you said makes a ton of sense if I'm talking to physical devices that do NETCONF/YANG things. Yes. It makes a ton of sense. Yes. This is part of the reason why, in the API document, one of the things that we talk about is the distinction between network service mesh in the abstract and network service mesh in the particular as it involves Kubernetes. And one of the reasons that we make that distinction is because, in the abstract, network service mesh has the network service manager and the network service manager API, and it has the need for a network service registry. That's it. How a network service manager manages whatever thing it needs to manage is not its business in the abstract. So the data plane API that we're talking about here is very specific to how the network service manager on a node talks to the v-switch on a node. And in that context, we aren't actually in any way, shape or form particularly assisted by those... Okay, understood. So you're doing a top-down approach. I'm talking bottom-up. Do the two meet? Yes, they do. Essentially, the network service manager has a set of responsibilities in the abstract, which can be met in any manner that makes sense for the network service manager. In the Kubernetes case, which is what we're focusing on, we have sort of fleshed out what happens to the network service manager in Kubernetes. If I had a network service manager that was managing physical network boxes, then that can be fleshed out in whatever way makes sense for the person running that network service manager. Okay, so can I see the top-down and bottom-up flow in the repo? Are you going to give me a pointer? Yeah, you can... I'll point you to the start of it, but please note the document is not quite complete yet. So there is a point where it's... No, no, no: code, code, code. No document, code. Yeah, so I can point you to various places in the repo, but you will find, for example, API definitions, then you will find controllers that do things, et cetera. So it is not going to be presented in a way that is terribly consolidated. No, it's okay, it's okay. Whatever you can give me; I'm just interested now. Because I've done bottom-up, from the bottom up to abstraction, through levels of YANG- and JSON-focused systems. You guys are going top-down, I just wanted to see how that... You guys are going top-down, I'm just interested. Yep, yep. And is it in plugins, or... I'm looking at the repo. Is it an API repo somewhere? No, no, it's the network service mesh repo, the ligato... I am there, I'm there. I'm in the network service mesh repo on GitHub. Yep, yep. So I'll get you links for that and try and give you good pointers, just jumping-off points, in the code. Okay, cool. And there's a document that actually makes a reasonable attempt at explaining it from the top down, so maybe that's much more comprehensible. So, did anyone have an opinion as to whether you want to review the API here next week, or whether that would just be a waste of the time on the call and you'd rather just have a document you can go read offline? Personally, I thought it was very useful.
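For reference while reviewing, here is a minimal sketch of the kind of request being discussed: a bidirectional memif-to-VXLAN cross-connect handed from the network service manager to the data plane. All of the type and field names below are hypothetical illustrations, not the project's actual API; the real definitions live in the proto files in the networkservicemesh repo.

```go
package main

import "fmt"

// Hypothetical illustration of a cross-connect request; the real NSM data
// plane API is defined in the proto files in the networkservicemesh repo.

// Mechanism identifies how one side of a cross-connect is realized.
type Mechanism struct {
	Type       string            // e.g. "MEMIF", "KERNEL_INTERFACE", "VXLAN"
	Parameters map[string]string // mechanism-specific settings
}

// CrossConnect asks the data plane to wire two mechanisms together
// bidirectionally: packets arriving on one side leave on the other.
type CrossConnect struct {
	ID         string
	LocalSide  Mechanism // e.g. a memif shared-memory interface to a local pod
	RemoteSide Mechanism // e.g. a VXLAN tunnel negotiated with a peer NSM
}

func main() {
	// The manager has negotiated a VXLAN tunnel with the NSM on another
	// node and now asks its local data plane (e.g. a v-switch) to wire
	// that tunnel to the pod's memif interface.
	xc := CrossConnect{
		ID: "xc-1",
		LocalSide: Mechanism{
			Type:       "MEMIF",
			Parameters: map[string]string{"socketfile": "/var/lib/nsm/memif.sock"},
		},
		RemoteSide: Mechanism{
			Type:       "VXLAN",
			Parameters: map[string]string{"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "vni": "42"},
		},
	}
	fmt.Printf("requesting cross-connect %s: %s <-> %s\n", xc.ID, xc.LocalSide.Type, xc.RemoteSide.Type)
}
```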
And it would be a good idea to review it. I would like to review the document first. Are you talking about a review during next week's meeting, or are you talking about a separate event? I was talking about a review during next week's meeting. Trying to schedule other meetings is very complicated for folks. Yes, I'm plus one on that. Yeah, plus one here too. Yeah, our guys are plus one too. Me as well. I expect there would be a lot of interest. So, okay, cool. Yeah, so that's been added to next week's agenda, and we'll leave that as the main item. Okay. Yeah, it would be great if people could review the document beforehand, because going through the details will be very time consuming. Can you put a pointer to the document? It's in a PR. Yeah, we need the link. We need the link, and we go to Q&A next week. Right, but here's the thing I would ask. That document is still in progress; there's a patch dropping this weekend, and the document needs to be cleaned up. So I will drop a note to the mailing list when that document is in reasonable shape, pointing people towards it, so that they can productively review it. I wouldn't want folks to go... That's fine. So just let us know when it's ready for review, and we'll review it before the call. Ed, just a question there. Are you adding, you're adding this stuff to the NSM API document? The document I was intending to bring up to snuff, yes. Yeah, so there is a preliminary version of the NSM API in the repo right now, under docs, as nsmapi.md. And of course you can browse to it, because all the documentation gets rendered. But that is a little bit incomplete. I think that's what Ed is, you know, kind of filling in the details of. That's the intention, yeah. And that would probably be what we would use to drive the discussion next week. So yeah, all right, cool. I'll add that, the link, below Ed's comment, if that's okay with everybody. Awesome, please do. Actually, there's some code there too. So is this the code that we will have a look at as part of the documentation review? Yeah, so this is the document that I was intending to review. And effectively, it's easiest to talk about things in their natural form. And so a lot of those things are, you know, basically talked through in proto files. But as I mentioned before, that document at this moment is incomplete. So if you get to a point in it where you basically feel like the document is incomplete or has become less coherent: yes, that's true. We will bring it up to snuff for next week. No, that's fine. That's fine, Ed. I mean, this is a work in progress. But I apologize for barging in; actually, I already read it, or scanned it, fast-read it, as they call it. It looks actually very comprehensive. Thank you. It's getting there. Yeah, I think you are too shy. I think this is cool. If you read it all the way to the bottom, there are a couple of places where it just doesn't quite fit together. I already did. You get like two paragraphs, and paragraph one is good here and paragraph two is good there. It's good, it's good. I like it, thank you. Cool, so awesome. So I think that's it for the data plane API from my point of view. Do you feel like we've talked about everything at this point, Sergey? Yeah, basically, it would be great if people start reviewing it more actively and provide feedback, because it's extremely important.
I mean, definitely it can be changed later, but it would be nice to have a better start, with more people chiming in with ideas, suggestions and all the details. Which is Sergey's very polite way of saying: please, for the love of God, don't make me refactor the API again. So, Sergey and Ed, what is the best way for us to provide comments inline? So, over this weekend I expect I will be pushing a patch to update the API inline. And then, either as part of that patch or as a follow-on patch to that, I will be pushing updates to bring the document in line with what's actually true in the code. But that's not what I meant. I meant: in Gerrit, for example, in the FD.io project, when we have patches, we can comment inline. You can't do it here. GitHub pull requests can be commented on. Yeah, you just hit the review button and then you can add comments; it's sort of a little bit like Gerrit inline. Yeah, it doesn't present them in the same way, or am I just not seeing it? No, you're just having the adapting-to-a-new-tool thing. You can add comments. You can also add comments inline in the code. All right, okay, I'll play with it. Thank you. Yeah, yeah. I mean, it's not exactly like Gerrit, but it's reasonably close; it's just different enough that it's unfamiliar. Mm-hmm, mm-hmm, okay. Cool. Awesome. So, we don't have too much time, and we still have a lot of material to cover, so I'm gonna jump to the next topic. Architecture review: we've already spoken about the data plane API. Are there any other architecture review items that we're currently looking at that have not been discussed? In the same API document, the section talking about the abstract components of network service mesh is actually, I think, pretty well done, from my point of view. So if folks want to comment on that, because they think something should be different, then that would be a wholly reasonable thing to comment on. It's one of those things where me thinking it's complete doesn't necessarily mean it is complete, but it does mean that it's certainly something people should go look at. Okay, has this already been pushed to GitHub for comments, or is it still in the Google Doc? It's actually already part of what's been merged. So let me go ahead and get a link to exactly the section that I mean. Okay, so this is stuff that I merged last week then, specifically. Yes, yes. So the network service mesh components in the abstract is one piece of it. Let me go ahead and stick that in the meeting minutes. Yeah, just for people to be aware: just because something's merged to the repo, if it's a document or spec or so on, doesn't mean that that's gonna be the absolute final version. If you find a hole or a bug or something in there, bring it up. You know, we want to iterate over this thing over and over again until we have something very solid. So, yeah, the network service mesh components in the abstract, and then the part that talks about the APIs in the abstract, or something like that, I think are in reasonable shape. Certainly the components in the abstract; the APIs in the abstract is sort of getting there. Actually, let me retract that: the APIs in the abstract is going to change a little bit as we fix the to-do things. So, definitely the components in the abstract. And I think that will help understanding; part of the reason I did that was to get a sense of what are the places where it's okay to stick Kubernetes-isms in, and what are the places where it's not okay to stick Kubernetes-isms in. So, cool. Okay.
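To make that abstract/particular split a little more tangible: in the abstract there is only a network service manager, the API it speaks, and a network service registry; how a manager drives its data plane is its own business. Here is a hedged sketch of those abstract roles as Go interfaces, with entirely invented type and method names (the project's real definitions live in the repo):

```go
package nsm

// Hypothetical sketch of the components "in the abstract"; all names are
// invented for illustration and are not the project's actual API.

// NetworkService is something a workload can request by name.
type NetworkService struct {
	Name    string
	Payload string // e.g. "IP" or "Ethernet"
}

// Registry is the abstract network service registry: endpoints advertise
// the services they provide; consumers discover them.
type Registry interface {
	Advertise(s NetworkService, endpoint string) error
	Find(name string) (endpoints []string, err error)
}

// Manager is the abstract network service manager. How it satisfies a
// request (Kubernetes v-switch, physical boxes, etc.) is its own business.
type Manager interface {
	Request(s NetworkService, consumer string) error
	Close(s NetworkService, consumer string) error
}
```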
Okay, looks like the links have been added. Moving on to the next item: X-factor CNFs. So I've added a GitHub repo that has some initial documentation that I've written. What I've done is I've taken the twelve-factor app methodology website, which is under an open source license, and I've reworked it and modified an initial version of it. And this is something I did while I was on the airplane back from Amsterdam, so this is a really rough initial version. But what I've done is I've created some guidance for how people should build CNFs. And so the most important parts are, I'll put the table of contents link in as well, so the most important ones are in the table of contents; there are some additional ones that I added. So it uses all of the same factors, with the exception of port binding, which is number seven. And I've added in five items. Number one: do not require kernel modifications or modules. Number two: explicitly say which payload types you consume and produce. Number three: list the interface mechanisms that your CNF supports, in order of preference. Number four: bind by payload and mechanism; so, in other words, don't bind directly to another CNF, bind by the payload and mechanism that you use. And fifth: treat metrics as event streams, so that the Kubernetes environment you set up for logging and consuming can make use of things like Prometheus and so on. So one thing to note about this is these are not specific to Kubernetes. They're general: how do you build a CNF that can run in anything that is Kubernetes-like? So if someone decides to bring in Mesos, or someone creates a new platform in the future, it should be easy to port these CNFs from one system to another. And they're also not network service mesh specific either. So they should also work regardless of what you're using to provide that control. So for those who are interested in that particular area, there's a couple of things that I need to do with it. Number one is it's not really easily consumable, so I'm going to change it. It's using some Ruby-based server; I'm going to get rid of that Ruby-based server and migrate it over to Hugo, which means that we can then use the same infrastructure we use to build our website for networkservicemesh.io, and do all the reviews and so on. It should be a lot easier to work with. And that'll also fix the issues around linking, because this thing has a very peculiar way of doing links that is not compatible with GitHub. So if you look at the table of contents and try clicking on the links, you'll see that it breaks, although if you go into the actual repo, you'll see each one listed there. So if anyone wants to provide a little bit of help with, number one, moving over to Hugo, or, number two, going over the documentation: right now a lot of the sections, especially among the original 12, are still twelve-factor-app specific, and it would be good to rewrite some of those sections to be more CNF specific. And so any help I can get with this would be fantastic. And of course, any help with the main topic itself; there are maybe items that I left out. Like I said, this is very rough. So... Frederick, I've got two questions. Sure. Okay, right now you're up to number 17, yes? That's correct. So we... I'm just looking at the repo. The last one is metrics. And this may grow depending on what we agree on within the NSM project, correct?
And where are those factors 13 to 17 from? Are they... Is this your own thinking? Is it from somewhere? From some other community? Some other, what we call in the IETF, best current practice? That's a great question. So right now these primarily came from a series of things. Some of it is from my experience working on network service mesh and working within the container space. Some of it is based upon conversations I've had with CNCF, with people like Dan Kohn and Arpit Joshipura, about the direction that they want to take CNFs. Some of it has come from internal conversations I've had within Red Hat. So there's a variety of different places. And I've been thinking a lot about what are the core minimum things that you should do in order to help someone who's building a CNF actually build something that will be easier to orchestrate, and easier to consume and manage as an operator. And I don't talk about this in this particular document, but I actually see three levels of conformance that a CNF can have. One of them is: you no longer require any custom kernel modifications or modules, so in other words, you can actually run it in a container. You can say that's a bronze level. A silver level would be: your CNF now scales horizontally. And the gold one would be: it scales horizontally and it can also be upgraded and downgraded gracefully without breaking any infrastructure. And so part of the idea is to provide the guidance so that CNF operators, sorry, CNF developers, can build CNFs that will interact better with their environments. And the second one is to also give the operator some level of confidence, where, if certain guidelines are met, they know what type of risk they're taking on. Like, using the bronze, silver and gold levels: if you take a bronze one, you know what level of risk you're taking versus if you take a gold one. Understood, understood. So I have a few suggestions, if I can. Sure. Because it looks like you actually spent quite some time thinking about it, and also drew on your experience, your colleagues and friends, and also references. What would be really, really useful, and I know other folks agree, is to actually back up each of those points, where you already have content, with informative or normative references. This experience comes from somewhere, whether it is the compute world, cloud, VM, networking, old or new, or some operational or development experience; it should be called out to motivate, to basically support this item being a requirement. And the 12 factors, from what I understand, are actually quite strict rules, but some of them are applied to container apps today in a less strict manner. So I don't know whether we want to express the strictness in a sort of MUST/SHOULD/MAY, like we do in the IETF. And by IETF, I mean the Internet Engineering Task Force, which is the traditional standards organization, the SDO, that has been used to basically build the Internet, in case you guys are not familiar with it, just FYI. But there are some other standards bodies, I understand, like ETSI and others, that do not use strict requirement keywords like MUST. But it may be good to have an indication, at least for the factors over 12, from our perspective, of the, you know, the strictness of the requirements.
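As an aside, here is a hedged sketch of what factors two through four above could look like in practice: a CNF declaring its payloads and its interface mechanisms in preference order, rather than binding directly to another CNF. Every name here is invented for illustration; the repo does not prescribe any such schema.

```go
package main

// Hypothetical illustration of factors two through four: a CNF declares
// which payloads it consumes/produces and which interface mechanisms it
// supports, in order of preference, instead of binding directly to
// another CNF. All names here are invented for illustration.

type CNFManifest struct {
	Name             string
	ConsumesPayloads []string // factor 2: e.g. "IPv4"
	ProducesPayloads []string // factor 2
	Mechanisms       []string // factor 3: preference-ordered
}

func main() {
	fw := CNFManifest{
		Name:             "example-firewall",
		ConsumesPayloads: []string{"IPv4"},
		ProducesPayloads: []string{"IPv4"},
		// Prefer shared memory, fall back to a kernel interface.
		Mechanisms: []string{"MEMIF", "KERNEL_INTERFACE"},
	}
	// Factor 4: a consumer binds to "something producing IPv4 over memif",
	// not to this particular CNF by name or address.
	_ = fw
}
```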
So that's comment one. And comment two is: I would love to work with you to keep defining those X factors. And I really like the name, thank you. Cool, yeah, any help I can get from anyone. And if you know someone who wants to help with this or could be helpful, definitely bring them over. I'm already shopping this around with a couple of other companies. I'll keep the names out of it at this particular point, until I get confirmation that they actually want to participate. But, you know, I'm starting to shop this thing around as well, at least the idea of it. And so, yeah, I want to be able to take this to various vendors and help give them that guidance. So anyways, I won't take up any more time on this unless someone has an additional comment that they want to provide. And feel free to ping me as well if you want to help. Thank you. Okay, so we have a quick toss-up here. I want to make sure that Maciek's item gets in, along with the KubeCon demo. Yeah, you had something that you were going to read out, that was the five minutes, but I also want to make sure that we have enough time for the discussion on Kubernetes policy. And so... I can give a very quick update; it's going to be three minutes. How about that? Cool, let's do that. Thanks. Okay, so I'm going to be speaking for me and Michael, and partially for Taylor. So, specifically focusing on the performance part, VNF versus CNF. Following the discussion on the last NSM call, Michael and me connected twice, and we have actually reviewed his current results in more detail. His current results on packet.net are somewhat off compared to what we are reporting in the FD.io CSIT labs that are maintained by the Linux Foundation in open source. Michael is now working on aligning those, by reducing the configuration from four cores to one core, and making sure that what he's seeing on packet.net is aligned with FD.io. The difference between the labs is: packet.net is a hosted environment, with some switches and things connecting the hosts that he's renting; in FD.io, we have full control of the complete environment, including the wires, and there are no switches. So we consider our environment to be 100% under our control; we consider packet.net to be 80% under our control. So that's one piece. Second piece: we're actually going to give Michael another server. He's already using one; thanks, Ed, for organizing the rental. And the other server should be coming online today; we had some RMA hardware issues. Assuming this is the case, as of Monday the testbed will be allocated to Michael, fully, 100% under his control, so that he can do his magic. And basically the idea is to progress on two fronts in parallel, using exactly the same software stack for both data plane and orchestration: the FD.io CSIT two-node Skylake-based testbed, and I will type that in, and also the packet.net machine. And I'll type the references in a moment, once I stop speaking. We're gonna talk again, I think, on Wednesday next week. And we have a meeting with Taylor and the team on Tuesday to review the demo scenario and so on. I will share with the community here on the following call. That's it, thank you. Unless there are questions. Three minutes. Yeah, that switch thing makes a big difference. So glad you discovered that, Maciek.
Well, you and me are in a very small group that believes that if there is an active device between two DUTs, it can actually, what's the word? Introduce some impairment or distortion into the measurements. And that's exactly what we're gonna capture. That said, packet.net is an amazing platform, because it allows people to reproduce it, by just booking the thing by the minute or by the quarter hour and running the tests. So the idea is to progress on both, using the FD.io CSIT testbed as a reference, and try to come as close as possible on packet.net and explain the discrepancies. So that's the goal, and we are quite confident that we'll be able to get there. The main challenge we have is time, because we don't have that much time. So hopefully we will get a demo working; most likely it will be something interesting. But I don't think it will be the ONAP vCPE use case, which was originally requested by the folks, because it is just complex and it's impossible to get done in time. But let's see how close we can get. Thanks. Thank you. Thank you. And I will now add the links. So sorry, very quick question, this is Ramki here. You mentioned the ONAP vCPE use case. So if you're not doing that, is the thought process to make it simpler? I apologize for making the reference; I would like to remove it from this recording. We're gonna... Thank you. Tongue in cheek. The ONAP use case is very complex. It involves a very long chain of devices, including the vCPE, including... sorry, there is some shouting in the background. Sorry, there's a dog. This is Taylor. I can speak to the vCPE use case. What we're planning on doing is using the chain. I don't think it's relevant to the demo, because we're not using the vCPE use case. Thank you. Let's let Taylor talk. You talk, Watson. There is one possibility, this is Ramki. So rather than the vCPE use case, we could go for something even simpler, like a virtual firewall or load balancer, or, really much, much simpler, DNS. vCPE, I understand, is much more complex, but we can chat more offline on that too. Yeah, I'd like to hear Taylor's comment as well, because you may have something useful here. Sorry, Taylor, I cut you off. I apologize. Please go ahead. We started by rebuilding the ONAP use case, and we've stripped it down. We had a fork under the CNCF org; there's a repo called onap-demo. It's a fork, and we did a lot of work on trying to make it repeatable by others, and decided to start over. And that's what the CNCF CNFs repo is, with all the comparisons that Michael and everyone is working on. We're not gonna have ONAP for the Seattle demo. We will contribute the network functions, the CNFs, as well as the VNF updates, back upstream. I don't know if we'll ever actually use their demo specifically; we'll probably help them. But what we're gonna do is recreate some type of chained network function use case that may be based on the vCPE use case from ONAP. We have most of those components. We've actually rebuilt all of them as containers, except for vG-MUX; that's the only one we're lacking. But if we come up with a different use case, or change the actual test scenarios that we run through, that's fine. It's mainly chained CNFs and VNFs that we can compare on Kubernetes and OpenStack. Just one additional thought is that here is where, better than vCPE, if you look at DNS or a virtual firewall or a virtual load balancer, it's probably much simpler. I know vCPE is pretty complex. So... That's the... Yeah, we...
Although I would say... We have for... Go on. Sorry. You can finish, Ed. Go ahead. Yeah, what I was just saying is: that's a good suggestion, Ramki, although the problem with DNS is that DNS looks like every other app. So it's not really the kind of thing you have to make special provision for as a cloud-native entity, because it doesn't really deal with packets. DNS is a protocol; it's sort of an application on top of networking, from that perspective. No, that's why I said there are DNS, virtual firewall and virtual load balancer; I was thinking actually more of the vFirewall and vLoad balancer than DNS. But it's sort of the same workflow inside ONAP. We can add the virtual firewall also, yeah. I think that's actually probably something good to think about. I think mostly, once we can get to the point where we can measure a chain of CNFs, which I think is where Taylor is going, then varying what is in that chain is quite doable. So I think they're probably on the right track in general. I do want to make sure, Ramki, that we have enough time here at the end for your other topic. Sure. Yeah, I agree. Totally; we can just add the virtual firewall also. Yeah, virtual firewall, small thing. vDNS, vFirewall, yeah. vFW, yeah. Ramki, if you'll follow up with us after, I'm happy to talk about some of the ones that we've done. We actually started with vDNS, and we went through that, and we started building out the different workflows, and we decided to go with what Ed was just speaking about: make sure that we can chain the different network functions and focus on those as building blocks; then we can create the different workflows. So right now we have several different comparisons that we've built up. We're also doing this baseline performance test, which Maciek was talking about earlier, to make sure that at the very lowest level, the simplest case, we can validate the hardware. We are gonna continue to add workflows, and we could look at something like the vFW and things like that that are more user focused, and then we also want workflows or test cases that are very specific on network performance. We're trying to make it capable of showing all of this. Makes sense, yep. So, we don't have that much time left. So the question is: do we have enough time to cover the Kubernetes policy? Maybe we can kick off the discussion, and perhaps have it as the key agenda item for next week. How about that? Yeah, I think that should be okay. Yeah, so okay. So what I wanted to sort of understand was: essentially, Kubernetes has a network policy resource, like all of the other resources. My understanding of the Kubernetes network policy is that the policy is designed to work specifically with the network plugins that are part of Kubernetes, the ones that get plugged in through CNI. What's interesting is the network policies themselves are not CNI specific; they're actually something that the network plugin has to ingest and apply. So one of the things about the policies in that scenario is that they describe what pods can be connected to, in terms of: you have a packet that's going out, and there may be an egress policy that says which namespaces and which pod labels this current pod can connect to. Or an incoming packet comes in: where is the source of that packet? And based upon the namespace and pod labels and the ports and so on, whether packets are allowed to be sent to, and received by, the pod as well.
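For reference, this is roughly what the standard Kubernetes policy being described looks like when built with the real k8s.io/api Go types; the example pods, labels and port are made up. Isolation is selected by pod labels, and allowed peers are matched by label selectors plus ports:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A standard Kubernetes NetworkPolicy of the kind discussed above:
// isolate pods labelled app=db, and allow ingress only from pods
// labelled role=api, on TCP port 5432.
func dbPolicy() *networkingv1.NetworkPolicy {
	tcp := corev1.ProtocolTCP
	port := intstr.FromInt(5432)
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "db-ingress"},
		Spec: networkingv1.NetworkPolicySpec{
			// Which pods are isolated: selected by labels.
			PodSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "db"},
			},
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				// Who may reach them: again selected by labels.
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"role": "api"},
					},
				}},
				Ports: []networkingv1.NetworkPolicyPort{{Protocol: &tcp, Port: &port}},
			}},
		},
	}
}

func main() { _ = dbPolicy() }
```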
So you have both sides, an ingress and an egress policy. Those don't really make as much sense when you start looking at it from how network service mesh does things, because what we're doing is a dedicated cross-connect from one consumer to an endpoint. And so the standard network policy itself is not really applicable in that way. And this makes even more sense when you start looking at things like shared memory. If you put a shared memory interface, like a memif, between two containers that exist on the same node, there's nothing in between them, and so it becomes impossible in that scenario to even enforce any policy. However, where we can enforce a policy, and we haven't thought much about this, is on the initial connections in the first place. We may have a policy, we haven't thought much about this particular path, but we can think about: how do we ensure that this particular endpoint should even be reachable by something that's requesting it? And how do we handle admission control in that particular scenario? One answer could be from the CNF side, where the CNF itself can handle some admission control. But simultaneously, even reaching the endpoint in the first place could be something that network service mesh can help facilitate. And so, in that sense, the standard Kubernetes network policy doesn't really make sense for us, but there is a network policy story that we likely need to address. And I think that this is a good kickoff for what such a thing would look like. Does that make sense? So, one thought here. If you really look at Kubernetes network policy, it's sort of really a security policy, right? Admission control: essentially, whether that prefix should be processed or not. Simple admission control, security. I am with you that it is different from service mesh, but I'm also thinking of it slightly differently: what if, let's say, the network policy were to have a next hop? Basically, rather than just simple admission control, go back and say: hey, here is a bunch of prefixes and here is my next hop. Then it sort of gets into the network service mesh paradigm: how you connect up these network functions. Isn't that true? Yeah, and that's something that we're looking to address, I believe, through wirings. And Ed, correct me if I'm misinterpreting this. But yeah, with the network service wirings that we're setting up, in terms of how they get chained and what the next hop should be, there is some control that can be handled in that scenario. But perhaps what you're thinking of is a more advanced use case, where you might have an endpoint that consumes some form of payload, and then it can select what the next one is. Maybe some of them bypass a firewall; maybe another one has to go through the firewall. And so you make a distinction, based upon some information, which could be a header or some other mechanism, to determine which one of these two paths is taken. So, is that closer to what you're thinking? Okay, so the other thing is also: right now in our network service mesh, we do have sort of a, I mean, a way we build the chain, right? Correct. So basically we say: hey, here is the one service, and then how do I connect to the next service? For example, firewall to DPI, right? So, how we build the chain. So I'm just wondering whether this construct can be leveraged for it. I mean, basically the network policy itself, but sort of saying: hey, here are the objects and here is my next hop, right?
So basically, my next hop in the service. There are a couple of things here. So effectively, what network policy in Kubernetes is, is selecting a set of pods that are to be isolated, and providing conditions under which things are permitted to reach them. Because the standard contract in Kubernetes is: every pod can reach every other pod at layer three, unless there's a network policy that tells you to isolate the pod. And so that's basically what it is: network policies in Kubernetes are policies about isolation of pods. That said, they do a very clever thing, right? And the clever thing they do is they select which pods are isolated using selectors on labels on the pod, which is very, very clever. And then they use a similar selector mechanism to tell you who is allowed to reach isolated pods, which I think is also very clever. So those are very smart things. When you bump up to something like Istio and sort of classic service mesh, they do similar-ish kinds of things, in terms of selecting which pods to direct TCP connections to, or direct HTTP request/response messages to, in classic service mesh. And they do that with virtual hosts right now. Where I'm currently heading is that network service wirings sort of evolve to fill some of that role, in terms of the higher-level policy and steering. The policy story means that you need to have some way of expressing that some network service endpoint is to be isolated, and then expressing who is allowed to connect to that network service endpoint. And I would maintain that whatever mechanism we use for that should use sort of the selector-on-labels kind of approach. The current thinking has been selectors on labels around the advertisement of that network service and the connection request for that network service. But does that start to make sense to you, Ramki, in terms of what the thinking is and how that meshes with the things you're finding? Yeah, definitely. But I think, yes, perhaps we can go through some more examples next time. Yeah, one other thing: if you jump on the IRC channel, there are conversations around this stuff all the time. For example, we've got someone from Orange who pops into the channel most mornings, and there's a bunch of conversations that have happened about trying to steer network service wirings closer to virtual hosts; they were originally thought about in terms of route rules. So lots of really interesting things happen in the IRC channel in these conversations. So if you wanna pop in there and start a conversation about this, I think that would probably also be very productive. Perfect. Yeah, just so you know, those conversations eventually get bubbled up here, even if the originator can't make it. We wanna drive architecture and so on through these meetings. So don't feel like you're cutting the rest of the community off if you decide to have a conversation there; it'll come back here. And with that, I need to cut off the meeting, because we're a few minutes over. So thank you, everyone, for attending, and we will see you next week. Thank you. Goodbye. Take care, take care. Thanks. Thanks very much. Thanks.