Nicolay. Hello. Hello. Taylor, you'll have to correct that if I did it wrong. And Nicolay, I couldn't recall the spelling off the top of my head, so let's just say that you're lucky that I can even announce your first name halfway correctly. No problem. I usually just copy-paste it from the previous meeting. Oh yeah, good. Hey, there it is. Maybe the other way of thinking of it is that I should consider that I'm lucky you can say it halfway correctly, so I don't get punched through the Zoom. Good deal. Folks, we are a couple of minutes in. I'm going to put a link to the meeting minutes in the chat. These are community meeting minutes and it's a community call. It is a CNCF call, so we do record the meeting and post it to YouTube, but participation in the meeting and in the meeting minutes is open to everyone. You don't need to be a member of the CNCF or be representing a particular project to have an active voice and bring up topics and things like that. So please don't be shy, and if your name isn't in the meeting minutes, please drop it in. We'll probably lollygag for another minute or two. Now's a good time to make a call for agenda topics, so if you have any of those, please toss them in at the bottom of the topics list, and I think that we will get to them today; we should have enough time. I hope Florin will be joining us today. So, Florin, are you on by chance? Yep, hi, I am. Very good. Good deal. Nice to meet you, Florin. Why don't you, you know... hopefully there's some flattery involved in your topic and your area of focus being of such interest that we're asking for an encore from you on Envoy. Yes, there is. Thank you very much. Good deal. Yeah, to go back into the meeting minutes, that was kind of a subject of discussion. We sort of meandered last time that we spoke, but that ends up being a good thing, because then we stumble into topics like the one that you're focused on. So given that it is five after and we've got some folks, it's probably time to get going. Taylor, I'm going to toss on maybe another... we've got maybe a couple of other topics. I don't know how much time or how much desire we have to try to dig into them here, but this is a very appropriate venue for some questions around work that you guys have been stewarding. Yeah, I was actually about to add it, so nice. Okay. So, with no further ado, Florin, if you don't mind just giving a brief introduction and taking it away with telling us about Envoy and VPP. Let me see if I manage to share my screen. Can you see my slides, or do you see my... nice, okay, perfect. So that means it works properly, I guess. Hi everyone. My name is Florin Coras, I'm with Cisco, a technical lead and also an FD.io VPP project maintainer, and the point of today's talk would be to give you a high-level overview of the benefits of using VPP as Envoy's network stack. Now, my background is in networking; in particular, I'm one of the co-creators of the VPP host stack, so I typically talk about transport protocols and their software implementations. However, today I'll mainly focus on how Envoy can leverage user-space networking and some of the benefits there. Now, before we dive in, and in the interest of those of you who are not familiar with VPP, which I hope are not that many: a very, very brief, quick introduction.
VPP is an L2 to L7 networking stack, which at its core leverages two important ideas: vectorized packet processing and the modeling of the forwarding path as a directed graph of nodes. Now, when these are done correctly, they ensure really efficient use of the CPU's cache hierarchy and, consequently, minimal overhead per packet when doing software forwarding. Another really important aspect of this approach is composability. That is, starting from these simple ideas, one can implement all types of network, let's call them functions, from device drivers to network features, and then tie them together to build a really efficient, full network processing pipeline. Looking at this from a less abstract standpoint, it might be worth noting that VPP ships together with DPDK, so it supports a large set of network interfaces, although it should be noted that it also has a smaller set of really efficient native drivers. It supports L2 switching, bridging, IP forwarding, and virtual routing and forwarding, or VRFs, so it has the right constructs for L2 and IP layer multi-tenancy. But in addition to these basic L2 and L3 functions, it also supports a multitude of additional features, just to name a few: a very efficient IPsec implementation, ACLs, NAT, MPLS, segment routing, and various flavors of tunneling protocols, things like VXLAN and LISP, for instance. On top of the networking stack, VPP also implements a custom host stack built and optimized in a very similar fashion. As one might expect, it supports the commonly used transports, things like TCP and UDP, but also TLS and QUIC. It has a session layer, or a socket layer; this one provides a number of features, but perhaps the most important for the context of this talk is the shared memory infrastructure that can be used to exchange I/O and control events with external applications, using per-worker message queues, or what's depicted here as MQs. Finally, to simplify interoperability with applications, VPP provides a communications library, or VCL, which exposes POSIX-like APIs northbound towards the applications. I guess at this point some of you may be asking the inescapable question, or maybe not: why yet another host stack? And you'd be right to do so, because from a functionality perspective, Linux is obviously the one stack to use. However, because Linux's networking stack was designed around a single-pass, run-to-completion model, per-packet performance is limited, and this is especially noticeable when hardware acceleration cannot be leveraged. But in addition to the speed that could be provided by a faster transport or a faster socket layer, the fact that the stack is in user space could be leveraged to optimize integration, and perhaps even minimize the number of data copies that happen between the application and the stack. Also, because the whole protocol stack is packaged with the application, it could potentially be customized or extended in certain situations. One can certainly imagine scenarios where, for instance, the socket provides more context data to the underlying layers with the aim of improving network utilization by the applications. Also note that this does not preclude Kubernetes integration; in fact, VPP can be used as a data plane by CNIs like Calico. So how exactly does Envoy integrate with VCL, and what sort of changes were needed, in case anybody's interested?
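Before the integration details, it may help to see what "POSIX-like APIs northbound" means in practice. Below is a minimal sketch of a TCP client talking to VPP through VCL's vppcom API (declared in VPP's vcl/vppcom.h). It assumes a running VPP instance and a configured VCL; exact signatures and endpoint struct fields can vary across VPP releases, and the server address and port here are made up, so treat this as illustrative rather than authoritative.

```cpp
// Minimal sketch: a TCP client over VCL instead of kernel sockets.
// Assumes VPP is running and VCL is configured (e.g. via VCL_CONFIG).
#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
extern "C" {
#include <vcl/vppcom.h>  // POSIX-like vppcom_* API exposed by VCL
}

int main() {
  // Register this process as a VCL application; behind the scenes VCL
  // attaches to VPP's session layer over shared memory message queues.
  if (vppcom_app_create("vcl-demo-client") < 0)
    return 1;

  // Sessions play the role of sockets in the VCL world.
  int session = vppcom_session_create(VPPCOM_PROTO_TCP, 0 /* blocking */);

  vppcom_endpt_t server = {};
  uint8_t ip4[4] = {192, 168, 1, 10};  // made-up server address
  server.is_ip4 = 1;
  server.ip = ip4;
  server.port = htons(8080);  // made-up port, network byte order

  if (vppcom_session_connect(session, &server) == 0) {
    const char req[] = "GET / HTTP/1.1\r\nHost: example\r\n\r\n";
    vppcom_session_write(session, (void *)req, sizeof(req) - 1);
    char buf[4096];
    vppcom_session_read(session, buf, sizeof(buf));
  }

  vppcom_session_close(session);
  vppcom_app_destroy();
  return 0;
}
```

The shape mirrors BSD sockets on purpose: create, connect, write, read, close. That is what lets a layer like a VCL socket interface slot in underneath code written against generic socket abstractions.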
And by the way, I forgot to mention this at the beginning: stop me if you have any questions, or feel free to tell me to skip over the details if this is not interesting. So, coming back to this: rather intuitively, let's say, the first step was to make sure that Envoy components do not make any assumptions with respect to the underlying socket layer, and consequently always use generic socket interfaces, such that they can potentially interoperate with custom socket layer implementations once those are available, because initially none beyond the Linux, Windows, or macOS one, depending on how Envoy was built, was being used. So, obviously, this was not exactly glamorous work; most of the changes were not features, they were more focused on refactoring. Still, out of the set of changes that have gone in, perhaps the most notable is that, as a community, we decided that as a general rule we must now avoid using raw file descriptors anywhere in the code. IO handles still expose the FDs, but last time I checked, I think we've managed to clean them up to a point where they are only used in a couple of places. We've added support for pluggable IO handle factories, or in other words, support for multiple types of sockets that can be used at the same time in the same instance of Envoy. An interesting consequence of the first point is that file event creation is now delegated to the IO handle implementations. The desired side effect of this is that the socket layer that provides the IO handles is the one that decides how the events for these IO handles are created. In other words, socket events are no longer tightly coupled with libevent. Some coupling still needs to exist, and I'll go over that in a second, but now that coupling is not implicit anymore. And finally, perhaps an interesting scenario that might serve as an example for the community going forward was TLS, which, mainly for convenience reasons, relied on BIOs that needed explicit access to the file descriptor. It eventually turned out that writing a custom BIO that uses the IO handle, as opposed to the file descriptor, is relatively straightforward, so we actually switched to that. I guess this reinforces the first point: as much as possible, although it's going to be a longer path, people should use all the means necessary to avoid using the raw file descriptor. Now, these changes are enough to allow the implementation of a VCL-specific socket interface, but they still leave one more problem to be solved, as I alluded to before. Namely, both libevent and VCL want to handle the async polling and the dispatching of the IO handles, but only one of them can be the main dispatcher. The solution to this problem is to leave control to libevent and to register the eventfd associated with a VCL worker's message queue with libevent. Now, if you recall, since I'm relying on you remembering my previous slides, the message queues are used by VPP to convey IO and control events to the application. The eventfd is used to signal the message queue's (the MQ's) transition from empty to non-empty state, just that. So this ultimately means that VPP-generated events will force libevent to hand over control to the VCL interface, which, for each Envoy worker, uses its own locally maintained epoll file descriptor to afterwards poll the events from VCL and subsequently dispatch them. Now, these are just the stepping stones for the Envoy VCL integration, and as first next steps, the plan is to further optimize the performance.
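To illustrate the dispatch arrangement just described, here is a self-contained sketch, not Envoy's actual code, in which libevent remains the main dispatcher, an eventfd stands in for the one signaled by a VCL worker's message queue, and a worker-local epoll fd stands in for the one the VCL interface drains afterwards. The libevent, eventfd, and epoll calls are real APIs; the session-dispatch step is a placeholder.

```cpp
// "libevent stays in control": the stack signals its MQ's empty->non-empty
// transition via an eventfd, libevent wakes us, and we then drain the
// worker-local epoll fd and dispatch whatever the stack queued.
#include <event2/event.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

static void on_mq_signal(evutil_socket_t fd, short, void *arg) {
  uint64_t n;
  (void)read(fd, &n, sizeof(n));  // clear the eventfd notification

  int vcl_epfd = *static_cast<int *>(arg);
  epoll_event evts[32];
  int ready = epoll_wait(vcl_epfd, evts, 32, 0 /* don't block */);
  printf("MQ signaled; %d session events ready to dispatch\n", ready);
  // ... hand each ready session's IO handle its file event here ...
}

int main() {
  int mq_eventfd = eventfd(0, EFD_NONBLOCK);  // signaled by the stack
  int vcl_epfd = epoll_create1(0);            // worker-local epoll fd

  event_base *base = event_base_new();        // libevent is the dispatcher
  event *ev = event_new(base, mq_eventfd, EV_READ | EV_PERSIST,
                        on_mq_signal, &vcl_epfd);
  event_add(ev, nullptr);

  uint64_t one = 1;
  (void)write(mq_eventfd, &one, sizeof(one));  // simulate VPP signaling
  event_base_loop(base, EVLOOP_ONCE);          // wake, drain, return

  event_free(ev);
  event_base_free(base);
  close(mq_eventfd);
  close(vcl_epfd);
  return 0;
}
```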
Now, the lowest hanging fruit there are the read operations, as VCL could pass pointers to socket data in the shape of buffer fragments instead of doing a full mem copy. The groundwork for this is already done; in fact, since the first time I presented this, I've gotten it to actually work. What's left is the actual integration, let's say. It's not enough if VCL avoids the mem copy once: if Envoy gets the data in its filters and proceeds to copy it several times afterwards, that will obviously lead to inefficient usage. So there's still some work to be done there. But speaking about performance: to evaluate the potential benefits of this integration, I used the following topology, wherein wrk connects through VCL to Envoy, which performs HTTP routing to a back-end NGINX. Now, this type of scenario might not be relevant in practice, and in fact I'd be delighted to learn from you, or anybody else who is using or deploying Envoy in practice, what would be interesting. Nonetheless, for the purpose of this experiment, this is ideal, because on the one hand it gives us an idea of how many VPP workers are needed to load Envoy, and on the other it gives us an upper bound on performance. So, at a glance, these results show us that for an equal number of cores consumed, one VPP worker is actually enough to outperform the kernel by a significant margin. Performance seems to be very good, 20 to 40 percent better, and to scale pretty well. The asterisk in the margin there is that after a certain point, about four to five workers, performance does not scale linearly anymore, and it behaves somewhat worse for larger payloads. Although it should be noted that for this test in particular, TSO for VPP was not enabled. So, backing up, or as a summary: results are really encouraging from our perspective, but there are still some things that need further investigation for a better understanding. We're clearly faster than the kernel, but we need to understand whether the scaling of the performance has to do with my test bed, with VPP, with the VCL and Envoy integration, or whether the problem could lie in Envoy itself, in which case we might need to further optimize some code there. With that, should you be interested in further exploring the Envoy VPP integration, please give the code a try; you have a link there to my GitHub. It's a bit stale, maybe a month old; I still need to upload the zero-copy version of the code. And in case I'm not able to answer all of your questions here, feel free to email me or grab me on the Envoy Slack. With that, thank you very much, and do let me know if you have any questions. Nice, very good. Well, before I ask a couple: Florin, thank you. And it's an open floor for others who might have questions or comments or feedback for Florin. I have a couple on my side, if I can. Sure thing. Okay. So, Florin, can you please go back a little bit in the slides; I'll let you know where to stop. Okay, let's stop with this one. Okay. The next one. The performance one. Yes. Okay. So, this is good, of course, and again, as we already said, thanks for the presentation; this is interesting work.
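Returning to the zero-copy read idea from the top of this passage: the sketch below contrasts a copying read with a fragment-based one. Everything here (RxFifo, BufferFragment) is a hypothetical stand-in, not VCL's or Envoy's actual API; the point is only that a fragment read hands back views into stack-owned memory, and the benefit evaporates if downstream filters copy the data anyway.

```cpp
// Toy model of zero-copy reads: the stack exposes views into its own
// (shared-memory) receive fifo instead of memcpy-ing bytes out.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

struct BufferFragment {
  const uint8_t *data;  // points into stack-owned memory
  size_t len;
};

// Stand-in for a per-session shared-memory RX fifo owned by the stack.
class RxFifo {
 public:
  void push(const char *bytes) {  // the "stack" enqueues received data
    size_t n = strlen(bytes);
    memcpy(storage_ + tail_, bytes, n);
    tail_ += n;
  }
  // Zero-copy "read": return views into the fifo, no memcpy here.
  std::vector<BufferFragment> readFragments() {
    std::vector<BufferFragment> frags;
    if (head_ != tail_)
      frags.push_back({storage_ + head_, tail_ - head_});
    return frags;
  }
  // The app must tell the stack when the bytes may be reused.
  void release(size_t n) { head_ += n; }

 private:
  uint8_t storage_[4096];
  size_t head_ = 0, tail_ = 0;
};

int main() {
  RxFifo fifo;
  fifo.push("hello from shared memory");
  for (auto &f : fifo.readFragments()) {
    // Consume in place; copying here would forfeit the zero-copy win.
    fwrite(f.data, 1, f.len, stdout);
    fifo.release(f.len);
  }
  putchar('\n');
  return 0;
}
```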
So one of the things that probably this group would be interested in, and I don't know if you have played with it or have any thoughts around this, is how this thing can be used in the cloud native landscape. Like, if I want to deploy this within Kubernetes and use Envoy as a sidecar, which is somehow depicted here, if you will. Yep. I mean, is there any out-of-the-box solution, any ideas? Do I need to use any, I don't know, customized Kubernetes or whatever? I don't know what you can say about that. Right, so let's look at several things here. First of all, could you use VPP with Kubernetes? The answer would be yes. And as I mentioned, in case you missed it I'll pass over the slides, there's even a talk happening now at KubeCon with respect to the Calico VPP integration. So Calico can use VPP as a data plane. Coming back here: what sort of integration should we expect, or what sort of integration would be possible for Envoy with Calico VPP, and then subsequently with the applications? There are several modes of operation. Yes, you could deploy Envoy as a sidecar and then have that attach to VPP, and you can have several instances of those Envoys, so not only one. The question afterwards is how you connect the applications to your Envoy, and what I'm depicting here is the general case, which probably is the safest case as well, and maybe we should dive into that a bit. Remember, here the integration is shared memory, and shared memory will not offer you the same sort of security that the kernel offers you today. What I'm depicting is NGINX talking through a TAP interface to an Envoy. So this fits precisely the model of deploying Envoy as a sidecar together with your application in a container, and then VPP acts as, let's call it, the container switch slash router; it's part of the CNI or programmed by the CNI. Another mode of integration would be to have only one Envoy per node, let's say, instead of adding one Envoy for each container. Now, it's well known that Envoy does not span namespaces at this point, and there have been efforts from others in this direction, but they have not been upstreamed. So if we wanted to do something like that, we would probably need to change Envoy. And finally, the most efficient way of doing this within Kubernetes would be to not leverage TAP interfaces but actually leverage something that we call cut-through sessions. If two applications attach via VCL to a VPP instance, what that VPP instance offers is the socket layer functionality, the host stack functionality. But if both of those applications are attached to the same VPP instance, that socket functionality is actually not required. The kernel is known to be inefficient here, in that whenever two applications attach to it, say using TCP, the kernel will actually go through the TCP layer implementation, so it does a lot of extra work that it should not need to do. Well, with VPP we support what we call cut-throughs, meaning VPP detects that both Envoy and the application are attached to the same VPP instance, and it uses pure shared memory buffers to exchange data. Now, this comes with a caveat: as I mentioned before, this is shared memory, so we haven't put that much effort into properly securing it yet. And that was a very long answer; I hope it clarified at least some of your points. Okay.
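As a toy sketch of the cut-through decision described above, the snippet below shows the branch in spirit: when both endpoints are attached to the same stack instance, the transport layer is skipped entirely and bytes land directly in the peer's fifo. The types here are entirely hypothetical; VPP's real session layer negotiates shared-memory fifos between the two applications and is far more involved.

```cpp
// Cut-through in miniature: same-instance peers bypass TCP entirely.
#include <iostream>
#include <string>

struct Session {
  bool local_peer;       // is the peer attached to the same VPP instance?
  std::string *peer_rx;  // cut-through: write straight into peer's fifo
};

void send_data(Session &s, const std::string &data) {
  if (s.local_peer) {
    // Cut-through path: pure shared-memory hand-off. No TCP state
    // machine, no segmentation, no checksums -- the extra work the
    // kernel would do even for two local endpoints.
    *s.peer_rx += data;
  } else {
    // Normal path: hand the bytes to the full transport layer.
    std::cout << "enqueue to TCP for remote peer: " << data << "\n";
  }
}

int main() {
  std::string envoy_rx;          // Envoy's receive fifo
  Session app{true, &envoy_rx};  // app and Envoy share one VPP instance
  send_data(app, "request bytes");
  std::cout << "Envoy received via cut-through: " << envoy_rx << "\n";

  Session remote{false, nullptr};  // peer somewhere across the network
  send_data(remote, "request bytes");
  return 0;
}
```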
Did that capture what you were interested in, or were you waiting for something more specific? Yeah, yeah, I mean, I guess there's a lot that can be discussed here; we can probably continue the discussion offline. So, okay, my second question, which will be the last, because I saw that Taylor already added the CNF initiative that they started. One of the things that folks in other groups are trying to bring into the cloud native world is CNFs, or cloud native network functions, and we can see Envoy as a networking function. I don't know if you have thought about it, and I understand that this is kind of experimental, but a question and maybe a piece of advice: would it be possible to actively figure out whether VPP is available or not, so that you can deploy this in public cloud infrastructure the same way as, let's say, a containerized Envoy? And then, if there's no VPP, to just use the standard kernel interfaces? Or is it so heavily modified that it can only function with VPP? Let me see if I understood the question correctly: you're wondering if, from the application, or maybe from... No, no, I'm sorry, from Envoy's point of view. Assuming that I want to deploy only Envoy in a container, not as a sidecar, not as anything else; I consider Envoy the function that I want to deploy, my application. The version that you have in your tree, and I don't know how this is going to go forward, but let's say that I want to use this version: is it so heavily modified that it cannot function with the standard sockets? I mean, can you switch back and forth with the same binary? So, as I tried to highlight at one point, we now have support in Envoy for pluggable socket interfaces. That means that you can even have multiple socket interfaces active at one time. So you can have, for instance, a kernel socket interface, which you actually always have, it is the default one, and then you can have a VCL socket interface. Then, based on Envoy-specific mechanisms, addresses, for instance, can request either default processing, or an address can come with a hint that says: for this address, please use the VCL socket interface. And then Envoy will make sure to open the right socket for you, through the right socket interface. So, to your question: yes, we can switch. You can bring it up with the default kernel interface and then you can load this additional module. That said, the code that sits in that branch that I mentioned is an extension, but not an official extension of Envoy. The reason it's not an official extension is that Envoy builds a static binary, so we would have to build part of VPP in order to build that extension, and at this point Envoy already builds way too many things. So I have not tried to push this upstream. In my conversations with Matt, we sort of decided that if there's enough interest in having this upstream, we could upstream it and make sure that everything is built together, and then you would have at runtime just some switches that you can flip, and you could use either the kernel or VCL. Okay, okay. Thanks. Thank you. Would that be a deploy-time configuration, or do you think it could be dynamic, where it asks what your capabilities are and, I guess, maybe prefers VPP, and if it doesn't find it, then it could fall back to others?
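The pluggable socket interface mechanism Florin refers to can be sketched roughly as a registry owning several socket interfaces, with a configurable default plus per-address hints. All class and method names below are illustrative stand-ins, not Envoy's actual types.

```cpp
// Sketch: several socket interfaces coexisting in one process, selected
// per address, with unhinted addresses falling back to a default.
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct IoHandle { virtual ~IoHandle() = default; };
struct KernelIoHandle : IoHandle {};
struct VclIoHandle : IoHandle {};

struct SocketInterface {
  virtual ~SocketInterface() = default;
  virtual std::unique_ptr<IoHandle> socket() = 0;
};
struct KernelSocketInterface : SocketInterface {
  std::unique_ptr<IoHandle> socket() override {
    return std::make_unique<KernelIoHandle>();
  }
};
struct VclSocketInterface : SocketInterface {
  std::unique_ptr<IoHandle> socket() override {
    return std::make_unique<VclIoHandle>();
  }
};

class SocketInterfaceRegistry {
 public:
  void add(const std::string &name, std::unique_ptr<SocketInterface> si) {
    interfaces_[name] = std::move(si);
  }
  // Addresses without a hint fall back to the default interface.
  std::unique_ptr<IoHandle> socketFor(const std::string &hint) {
    auto it = interfaces_.find(hint.empty() ? default_name_ : hint);
    return it == interfaces_.end() ? nullptr : it->second->socket();
  }
  std::string default_name_ = "kernel";

 private:
  std::map<std::string, std::unique_ptr<SocketInterface>> interfaces_;
};

int main() {
  SocketInterfaceRegistry registry;
  registry.add("kernel", std::make_unique<KernelSocketInterface>());
  registry.add("vcl", std::make_unique<VclSocketInterface>());

  auto a = registry.socketFor("");     // default -> kernel
  auto b = registry.socketFor("vcl");  // address hinted onto VCL
  std::cout << "created handles via two different socket interfaces\n";
}
```

Swapping the default to "vcl" at startup models the "detect VPP, then prefer it" behavior discussed here, with unhinted addresses automatically landing on VCL.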
Interesting. I think the answer could be that we could do the second option that you mentioned. So, if you have the right means of detecting that VPP is active when starting Envoy... well, right now one of the options at startup is to say: I would like to use the VCL socket interface as the default, as opposed to the kernel socket interface. I'd like to remember, but I can't right now, whether we can change the default socket interface at runtime. So, when I say default, I mean that if addresses are not injected with the right attributes into Envoy, they will all default to using the default socket interface. If you, for instance, configure Envoy with addresses, and by configure with addresses I mean when you pass an address that goes to a resolver, and you configure that resolver to always default to assigning VCL as the interface, then you actually do not care. So basically this is a configuration that you can inject at runtime to say: whenever you open a new connection to a back end, for instance, make sure to use the VCL interface, not the kernel interface. You can do that explicitly. If you are just worried about default behavior, then, if you can detect VPP when Envoy starts, you can just configure it to use VCL as opposed to the kernel as the default. Florin, a question. You may have answered this in one of Nicolay's questions, but succinctly: VPP is user space, but as I understood it, its installation might require a kernel module or two that you wouldn't commonly find available in popular cloud provider environments. Is that an accurate characterization? Kind of. As far as I know, it works on any of the default kernels. It's more about the stack right below VPP, what it uses, and then you get into the DPDK case, and then you start looking at integration with something else; but you don't have to use that by default. That's exactly so; that was a good description. It all depends eventually on what DPDK needs. If you deploy VPP without DPDK, it will depend a lot on what sort of drivers you try to use, SR-IOV, AVF, or anything else; you will need just the dependencies for those, but normally it should work with all current, well, with all modern kernels, let's at least stipulate that. If you're thinking about kernel modules that might be needed and are not typically provided, I'm guessing you're thinking about something like vfio-pci or stuff like that. Those are typically needed for DPDK. Gotcha. I've seen more problems on the physical host side, like whether certain things are turned on in the BIOS, more than whether the kernel works. And then you get into stuff like privileged mode. I'm wondering how you're going to get access: if you use, say, memif devices to talk between containers or pods, then having access to the device files has to happen, so you have to figure out something there. Got you. Okay. Yes. So, given those requirements, is there a specific... well, let's take EC2, for example: is there a specific EC2 instance type and OS? I guess this is also one of those it-depends-on-what-functions-you're-going-to-use things. Yeah, maybe it is. Fair enough. Sorry for interrupting.
We know VPP can be deployed in EC2, and it has been deployed, but as you said, of course, with specific functions. I've never tried doing this with Envoy, for instance. We've done it with IPsec, for instance, just to see how fast the implementation would be, and with DPDK it seems to be working fine. Nice. Okay. It's been some time, but I recall trying to assist with CSR, well, I think it was the CSR 1000v and Nexus 1000v, kind of going after secure tunneling use cases. Okay, very good point. As far as we can tell, and when I say we I mean the community, I'm not talking in the name of my employer, we've managed to get this to work in those sorts of scenarios, and performance used to be pretty good, maybe better than the ones you've mentioned. Another quick question; it might be that I heard this out of context or didn't hear it right, but part of your discussion was about an Envoy per node and some caveats around that, and I didn't quite catch the use case or the need for that architectural model. Very good question, actually. The problem, not that I've hit it in practice, was that Envoy's communication to the upper layers, say to Istio, becomes the bottleneck when you have too many Envoy instances. So, for instance, in large or moderate deployments you can end up needing hundreds of megabits to gigabits per second of control traffic in the case of a massive restart event, let's say. So the idea, the solution to that problem, was: well, let's have one Envoy per node and have that be multi-tenant, as opposed to having multiple small instances of Envoy deployed as sidecars. I think the Istio folks have been working on that; I don't know exactly how far they've gotten with it. But thanks, that makes a lot of sense. Questions from others for Florin? Florin, this has been nice. It's a special treat, I think, for some of us; we can switch between doing project reviews and meandering between a bunch of topics to receiving presentations like this, and the nerd in me appreciates a good set of diagrams. So, thank you very much. Thank you very much, Lee, for the invitation; it's been my pleasure, and hopefully it's been useful for you as well. Yeah, it really has, actually. And Ed, thank you so much for making this connection; appreciate it. You know, you sort of set it up. Always happy to hear Florin speak. And, you know, it was good that this came up in the course of conversation. I am going to... well, I'm fumbling with Zoom. This doesn't happen that often, but I think I had to press the button. We're not using WebEx, otherwise I'd talk about the ball, the peripheral ball. Very good. So, the next couple of topics up; I think the next couple are relatively quick, more about awareness, probably. For some of you who've been on some of the more recent calls, we've used this time to opportunistically discuss some of the work streams that are taking place inside the service mesh working group. The service mesh working group is just a subgroup, sort of the subgroup focused on service meshes; SIG Network itself has a much broader field of view. It's worth noting that we had been hosting those discussions, those sessions, at this time, kind of using this time to advance some of those initiatives.
A couple of the initiatives within there are ones people are requesting more time to discuss and advance: SMI conformance is one, and the other is SMP; I'll cover both briefly. A number of you are familiar with SMI conformance; some of you are SMI maintainers. This initiative is, well, actually, since Taylor is on, I'll use a common analogy that is used for CNF conformance: there's the specification, and there are, I think, seven service meshes that signal compatibility with the spec. That's great. The last couple of major service mesh announcements of new meshes coming into the ecosystem were SMI compliant, actually the last three or four, maybe. And so, just as there is a Sonobuoy for Kubernetes, for the ninety-something distributions of Kubernetes, there's a Meshery for SMI, to help validate conformance to that specification. And so there's a recurring meeting to be scheduled to help advance that initiative. If we are organized about this, we'll send out a poll to ask what a convenient meeting time is; if we're not organized about it, you'll just see it on your calendar. One comment on SMI conformance before we move to this other one: it touches up against some of what Florin was speaking about with respect to the value that use of VPP provides; some of that was around performance. There's an emergent specification called Service Mesh Performance, or SMP. Within the context of the discussions around SMP, this week we were meeting with the maintainers of Envoy's load generator, called Nighthawk, and discussing a number of things. One of those things is, in some context to what Florin said earlier: Envoy has different distributions, and there's a project that assists with that; that project is called GetEnvoy. As service mesh gains popularity, performance is a question on a lot of people's minds, or is a continued question, just as and when people use service meshes. So there's a proposal, tentatively named GetNighthawk, to help create distributions of Nighthawk; today it's available in a Docker container. And so there's a sub-stream kind of inside of the SMP discussions, and that's to be scheduled. So, any comments on SMP or Nighthawk? Good deal. And so we've got about 15 more minutes left and a couple of topics. Taylor, given that there's 15 minutes, I don't know how long each one will take, but I thought I'd ask which of the two of these you would like to prioritize for discussion first, or whether you consider them kind of hand in hand. Well, they're related, but they are, I guess, independent pieces, so I could probably do the first real quick and then move on to the next, which is maybe more important. Could I share my screen? So, the cloud native principles. These papers, which are here in this repo, are a whole set of papers trying to break down the different concepts that are all tied into what you have right here. So when we go and look at what CNCF has, it's a minimal set of information: part of it talks about what it's going to do, like benefits, and how this works as far as groups, but there's actually not a lot that really talks about what these terms mean. So this has been ongoing work for quite a while, and maybe the newest thing, and I don't know if you've seen this specifically, Lee, is that from getting feedback talking to different people in the TOC and other places, we created the fundamental concepts area. So this would tie into what you see in these definitions.
And most of these would be agreed on by most people; we're trying to keep it more generic and not Kubernetes specific. But this is to lead up to these other sets of papers. So, starting out with breaking down what we mean by cloud native and going into each of the concepts... I think I just clicked on the wrong one; this one was the one I meant. These actually start breaking down all of those individual concepts, and you'll have an area here that's more plain English, and then it talks about how it ties together, with references. So that's the big thing: this isn't just coming from the people that have been involved. People that are creating software, telco service providers, that was a lot of the focus here, and networking folks. But these references draw on a lot of different people doing things in DevOps, in networking in general, and in cloud native, and the whole set of papers builds through to eventually get to this area: what do we mean when we say cloud native networking? And it goes down and tries to answer different questions, and then it actually breaks those down into further pieces, so you have material talking about what we mean by microservices and immutable infrastructure, and then getting into the OSI stack: how does it relate to the OSI stack? And that's really the main thing here, these sets of papers. They're also available outside the repo, and they are leveraged by several different communities; there's been a lot of collaboration from people on them. And CNTT, I don't know if anyone's familiar with that, the LFN community, they point to some of these. But at this point it's something where there are a lot more people within, I'd say, CNCF in general that want to have more of this well defined, and so that's the effort. I'd be happy to get more eyes on that. How much time do we have? Nine minutes. Cool. Yeah, this is right. I guess a quick point of clarification: the discussion that we've had in this SIG a few times has been about the cloud native networking principles, but the overarching initiative is to further refine cloud native, which, I have to say, you guys are sick puppies for trying to take on, because, well, one, there's some natural contention in trying to define all the components, all the characteristics, of what makes something cloud native. And, expanding on that, there was a similar initiative proposed by an architect at Microsoft, and it was to start with a bunch of patterns; it was to start with service mesh patterns. But his vision was to define much of that pattern space, like cloud native patterns for all the things, of which it was hard to fathom that ever landing or congealing. So just to clarify, then, I guess the question is: the cloud native networking principles, those are the deepest set of papers thus far? Is that accurate, or are there some lengthy papers on what it means to be a microservice, or to be loosely coupled, or so on? I don't think there's been anything that pulls it all together that's as extensive as these sets right here, and really this is the core. So this one is a build-up, but, as you can see, it has a ton of references; these all go into lots and lots of different books and people that have been doing this. They don't all say cloud native, but, you know, managing the cloud, and that goes all over. I don't know of anything that's as extensive as these sets.
So it's kind of an aggregation of all those. Yes. It's been, I'd say, brutal to take on trying to look at all of the layers. But what we've found specifically, and this leads into the next topic, the CNF conformance work: when we're looking into telco and how to help bring in some of these things, the philosophies like DevOps or CI/CD that are just the norm for enterprise and everything else, and try to bring in a lot of the philosophies and methodologies that are already commonplace, you have to go further back. It just doesn't work unless you have those concepts well defined. And I think that's why, like, here's a vendor, Metaswitch, which I can drop in there; they just came out with something in November, and these papers were done more than a year before that. But it's taken this long for vendors to actually start talking about why virtualization doesn't work and why you have to rethink things in a different way. Those coming from the enterprise and other sides are just like: I already agree, it's a given. But they go in and talk about having to re-architect; you can't just take your application as is and move it over into containers. And see, that's what this is about: how can you say that without breaking down what the underlying principles actually mean, how they're applied, and how they're going to affect you? Got it. Happy to chat more if people want on that. I would like to at least mention the CNF conformance program and the presentation that happened this week to the TOC. It was primarily about a new working group, which I'm going to actually go over into; it's probably easier. So there's a new working group that's being formed, and there will be a first kickoff meeting at KubeCon; it's on the schedule. The idea with this conformance program is to have something similar to what the Kubernetes conformance program does. The way the Kubernetes conformance program breaks down underneath is that you have the conformance working group, SIG Architecture, SIG Testing, and they're all handling different aspects. Within the CNF conformance program, we have the conformance test suite project, so that would be equivalent to what SIG Testing is doing. And, as I think you might have mentioned, Lee, when you said something about Sonobuoy earlier: this project has created the test suite to look a little bit more like Sonobuoy as far as configuration and other things go, but it actually has tests within it, versus Sonobuoy, which has a plugin to run the external tests, tests that you could have run directly using the framework in Kubernetes. So that's where the mechanics and actual tests are implemented. Right now it all shares one repository. This new working group will be defining, and I'm going to bring up the charter, probably the biggest one: what it means to be conformant with regards to cloud native best practices for CNFs. And, you know, one of the things that we're pointing out is data plane CNFs. So I think the stuff that we were talking about today with VPP and Envoy is very important for these.
When you look at a CNF or application providing network functionality at a non-data-plane layer, it's maybe a lot easier to talk about its behavior and best practices, because it's going to look more equivalent to stuff that's already in development, where SIG App Delivery is already saying: here are some best practices. But when you get down to data plane CNFs, and other ones, maybe operators and stuff that are tied in, it starts to get a little bit different as to what best practices look like. So this working group is going to be focused on that, as well as the initial scope and the process, like what the process is: just like Kubernetes, you walk through a certain stage, you run Sonobuoy, you have a pull request, there's a bunch of things. So it'll make all those decisions. And then, as I said, the test suite project will be separate. At KubeCon, if you have time, and at the future meetings, we definitely want people that know networking and application development, and we're trying to get them working with the cloud native side and then working with the service providers, the telco people. I know we're at time, but I can answer any questions. Taylor, was there any further feedback from the TOC from Tuesday's presentation? Not much. I mean, mainly that they're trying to get more people engaged on it. And, you know, I went to SIG App Delivery yesterday and came here today because there's overlap in the way these things work, but I think we'll see more by KubeCon, and it'll continue. Okay. So, we used to host the CNCF networking working group, sort of before SIGs became a thing, and the networking working group sort of rolled into SIG Network, or became SIG Network. And the structure as it is now with SIGs is that they may end up spawning any number of working groups within the SIGs. So I guess in part what I'm trying to say is that the life cycle of a working group operates in the context of a SIG, and so, yeah, getting a landing spot in a SIG makes a lot of sense as kind of a home base. As an example, we just talked about the service mesh working group, but another one within CNCF SIG Network is the universal data plane API working group, or UDPA, which formed around the Envoy API. Any feedback from SIG App Delivery on your presentation yesterday? I mean, they're all interested. There's an air gap working group; that's the one telco-focused working group that was in SIG App Delivery, because most of its focus is non-networking telco type apps. And that one, sorry, air gap has more of an edge type of focus, so it doesn't match up; it doesn't cover most of the stuff that we're talking about specifically, like the core network types of network functions, and those types of things are going to look different from what air gap covered. But there's interest, and at least, I think from the standpoint of best practices and such, there's going to be a build-up in multiple areas: there are going to be things in SIG Network that we want to have covered, and stuff included from SIG App Delivery. And that's just listed as different groups; I mean, we think this is just a subset, but these are the ones where we think there's going to be collaboration. Okay, so it's CNF as in cloud native network function? Yes. Okay, cloud native network function.
Versus, we're not saying containerized network function. And there are a lot of different thoughts on what a network function is, whether it's a name that's just a marketing term, or whether you're going to take it and break it down to what the intent of those words is. Which is why part of the scope is making sure that it's communicated also within the working group. But right now you could, I would say, think of it as, and this is from some of the telco service providers even, a telco or networking application. And those that are participating right now are telco-centric, with telco and kind of networking as the first in scope, kind of data plane conformance. Yeah, so as far as conformance goes, it's trying to provide something for that. Right now it's for the telco space; I mean, some of the service providers have said telco is a subset of the networking domain, so then it becomes broader as far as that goes. But the idea right now is to help telcos in actually becoming more cloud native, and right now that means: let's focus on the applications that are deployed on their Kubernetes-based platforms, or distros, whatever you want to say. This is good. Please follow up with Taylor if you have questions about this. I recognize we're five after, so we'll end it here for today, but thank you, Florin, and thank you, Taylor. It was a full agenda. Same time in a couple of weeks. Thanks for having me. Thank you.