to just rub it in. I could have actually overlooked that, but now it's right in front of my face. Just choke on it. I just finished dinner, just digging into a little dessert. If I don't stop the call, I want to hear you choking on that later. Yeah, I'll share this, since we have to obligatorily waste the first three minutes or so: yesterday I tried a, well, it was like a chili popsicle. I think it had cayenne pepper and some pineapple chunks and something. So yeah, I was really out on a limb there. I don't know if I'd brave a chili popsicle. It doesn't really sound like dessert. Oh, right, I mean, throw some sugar in it, it'll be fine. Jim, Jim, there's, oh yeah, it is blue shirt Thursday. The thing is, this is like a pre-QCon, QCon type deal. This thing used to fit. Always better to be too big than too small, right? Would you come have a chat with my wife and tell her that? I just bought a bunch of shirts which were one size bigger and a bunch of labels which were one size smaller. Stitch them in, it's exactly the same size. Nothing's changed.

Welcome, all. We're gonna get rolling in about a minute. The meeting minutes are posted in the Zoom chat; be sure to jump in and plop your name in there, if you would. Some of us might even be able to make some new friends today, and we've got some old faces and some new faces, so this is great. I would just say ice cream always helps the social network, right? Very good. Amy is with us, I believe. Ken is with us, I think. Yep. And Mr. Klein is there. And we're about four after, so let's get rolling.

Thanks everyone for coming today. It's nice to have a few of you fresh. If you haven't been on a CNCF SIG Network call before, or really any CNCF SIG call, this one's not much different than the rest. We adhere to, well, hopefully adhere to, CNCF cultural values, including recording these calls and posting them publicly; we do that, and we ask that you be respectful on the call. I'm excited about today's agenda; actually, most of the times we meet I get to be excited. A lot of times we're looking at really interesting projects, nearly all of which end up inside the CNCF at various levels, and a number of you on the call represent those very projects. Also, at this last KubeCon EU we gave an intro and a deep dive; Matt Klein and Ken Owens, who are co-chairs of SIG Network, have helped give those, and I think we've given them a couple of times now. Our initial one had a little bit of what today's theme will be: the overarching theme of today is a call for participation, so you'll probably hear me say that a couple of times. We missed a beat last time, I think, because the last time we were going to meet was during KubeCon week, and during KubeCon week, you just KubeCon. But the time we met before that, we had discussed the notion that there were a number of topics and work streams forming specific to service mesh. I think we discussed what some of those were on this call. Today we wanna discuss what those look like, as part of the charter of the service mesh working group.
Last time we met, we had agreed that the core agenda for SIG Network had dwindled enough to leave space to use this same meeting time for the service mesh working group. That's subject to change based on future needs to do project reviews or other topics that come up for SIG Network in general. So SIG Network doesn't only look at service mesh things, but that's today's focus, in the context of a subgroup, the service mesh working group. We wanna introduce that today and talk about some of the work streams, and, again, an overarching theme here is to solicit interest in the projects presented, as well as potentially projects that aren't listed. So, Matt or Ken, anything we wanna note before we take a look at the service mesh working group slides? Let's dig in then, fair enough.

All right, so the service mesh working group. By the way, the call we're on right now has few enough people that I'm hopeful it becomes pretty interactive, so please don't treat this as a formal call. Anything you say will be recorded and posted, but that's not the point. The point is this: thank you for coming today. If you like what you see, please come back, please express opinions, bring ideas, help shape what we use this hour for. Yeah, quick question: so you're just not gonna give a monologue then? Okay, I just wanna make sure. Let's try to break that a little bit. Yeah, no, thanks, Steve. We're fortunate that of the three or four projects we'll talk about today, you'll hear from others and not just me, which is nice. Moreover, as they come up, please just interrupt, like Steve did.

So there are a couple of things around service meshes that a collection of you have been either asking about or working on, and we're trying to help uplift those efforts, shine a light on some of them, and bring others to bear on them, to influence them. So, to Steve's point: not just questions and comments, but influencing and directing. Of the projects we're about to talk about, which we could refer to as either projects or work streams, today they'll be introduced rather than advanced, because it's about introducing them, making sure people understand their vision, and whether or not you wanna get involved and do things there. At the end of the call we'll again do a kind of call for interested parties, and we'll figure out how to have times in which we can go fairly deep. A lot can be done asynchronously in terms of advancing these projects.

There are some commonalities across them. One is the notion that the CNCF lab is an excellent resource, particularly as we look at anything at scale: doing tests at scale, which a lot of times has to do with performance. That's, I don't know, arguably an underused resource, maybe. It's been well used in a number of instances; a lot of very interesting analyses have come out of those labs. So I expect the labs will be used for a couple of the projects here. Ideally, because these analyses are a lot of times point-in-time things and software changes, you'd run a test again or redo an analysis to the extent people consider it warranted. So I wanted to call out that resource, and part of the goals of these initiatives is in fact to publish a few things.
Some of that's analysis, and some of that's service mesh patterns. This topic on its own is a personal interest of mine, and I know it's a personal interest to others on this call. If folks are familiar with Paul Bauer of Microsoft, he had been really interested in this space and had been trying to help organize an effort around identifying patterns, documenting them, and sharing them in a vendor-neutral way. And I'm really biased: I think this is a great venue for vendor-neutral stuff, and I think so does he. I've been pinging him. I think that effort might have puttered out, but I'm hopeful we can pick it up, and I'm hopeful that any number of you or others will participate.

This is to say there are any number of service meshes, by the way, like 20-plus or more. It depends on how you want to count, and on whether you count different distributions of Istio as individual ones or not. It's fairly obvious to everyone on this call that different service meshes can be used for common purposes, and they can also be used for different purposes. Not all of them are built toward the exact same vision or the exact same goals; if they were, we would probably have fewer of them in the world. For my part, I anticipate we will have fewer at some point, though I would expect, like a couple of others on this call, that before this year is over there will be more in the world, not fewer. So the call to action here, the call to interest around patterns, is to help achieve part of the charter of CNCF SIG Network, which is to inform broadly. In this case that's informing by providing references, I don't know what other word to use than patterns, of common uses of this tech and the patterns by which it's used. Nick Jackson, you're on the call here, and I know this is a particular focus for you. So as not to make this a monologue: is there a different way you would characterize this, or certain examples you might give to help get people started?

I think the thing about patterns is that it's about looking at the common tasks people want to use a service mesh for. Because ultimately, what we're trying to do with service mesh is take network reliability code, network observability code, and network security code, which has spent a long time inside the application, and move it out. But conceptually, whether that sort of logic is in the service mesh or in the app, you're trying to do the same thing. You're trying to smoothly balance the routing between different services. You want to implement layer-specific routing. You want to do things like managing load balancing to caches. You want to handle the unreliability you get from distributed systems, networks, and dynamic systems in general. And the core thing is that we're all doing the same things. We're trying to do canary deployments. We're trying to ensure there's a pressure-cooker-safety-valve kind of circuit breaking on our services so we don't end up with critical cascading crashes. We're trying to balance traffic across regions, fail traffic over between regions, and manage it across multiple different clouds.
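To make one of those patterns concrete, here is a minimal sketch of a canary deployment expressed through SMI's TrafficSplit API, assuming the v1alpha2 shape of the spec; the service names are hypothetical:

```yaml
# A canary rollout as an SMI TrafficSplit; v1alpha2 shape assumed,
# service names are hypothetical.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-canary
spec:
  service: checkout          # the root service that clients address
  backends:
  - service: checkout-v1     # stable version keeps most of the traffic
    weight: 90
  - service: checkout-v2     # canary receives a small slice
    weight: 10
```

The value of the pattern write-ups would be exactly this kind of mesh-agnostic description: the same canary intent, regardless of which mesh enforces it.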
And I think those patterns really can be distilled down; there are probably quite a few of them, but you have that commonality, and it doesn't matter which industry you're working in or which service mesh you choose. I believe that by educating people on the patterns of use, we really help people move distributed systems forward, and I'm very, very passionate about this.

So on that particular topic, it's a general call for interest and participation. Do signal if those are of interest, whether you want to help produce the patterns, identify them, work through them, or just be a consumer of them and provide feedback. There'll be ongoing discussion and work there, and that's in part because there are a lot of service meshes. Nick, you and I were talking about this a little earlier: it's a common problem that everybody has. I genuinely think that the service mesh, or the patterns that go with service mesh, is going to be in any distributed application.

When it comes to abstractions, they're becoming increasingly important, because you need a way to think and rationalize across different service meshes. SMI is trying to do that. It takes the approach that says: instead of this very specific YAML for controlling traffic routing on brand X of service mesh and a very different method for brand Y, what we're going to do is consolidate a consistent practitioner and operator experience with an abstraction and interface layer into the underlying implementation. SMI's growth around that, thinking first about the core workflows but hopefully expanding to be more encompassing, is really going to help the practitioner, because as a practitioner you're probably going to change jobs and find yourself working with a different mesh, a different cloud, or a different set of technologies. The easier we make it to maintain and carry over knowledge of service mesh operation, the more it benefits progression in this area.

On a similar vein, you've got to be thinking about performance, and this is really important for a number of reasons. Service Mesh Performance is another working group, which is looking at how you can describe and measure mesh performance; we'll look at that on the next slide. But then you've also got connectivity. CNCF's own statistics, and many others including Gartner's, show that a high number of people are operating in a multi-cloud world, or operating heterogeneous environments. They do that for a number of reasons: through acquisition, because they want developer choice, because they want to hedge their bets across multiple clouds, or to take advantage of different costs. The key thing is that we want to be able to connect all of that together. So VMware has a specification called Hamlet, an open source specification that's currently looking at the CNCF, and the idea is a common interface to manage things like service catalog synchronization between different meshes and identity federation.
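As a rough illustration of what Hamlet is after, a federated service entry shared between meshes might look something like the sketch below; the field names are illustrative, not quoted from the actual specification:

```yaml
# Hypothetical sketch in the spirit of Hamlet's federated service discovery;
# field names are illustrative, not taken from the spec itself.
federatedService:
  name: payments
  id: payments.mesh-a.example.com   # identity a peer mesh can verify
  protocols:
  - https
  endpoints:
  - address: 203.0.113.10           # ingress point exposed by the owning mesh
    port: 443
```

The point is less the exact schema and more that two meshes agree on one format for advertising services and federating identity.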
So it would be really nice to see a standard take hold there, so that Istio and Linkerd, Istio and Consul, VMware Tanzu and Consul service mesh and Kuma, everybody, can integrate together. That benefits the practitioner, and it also benefits the vendors, because they're no longer constrained on integration: people can choose the right tool for the right job. I'll pop up the next one, yeah.

So Service Mesh Interface conformance is a project which has just been picked up by the CNCF. The intention is to be able to say: right, we're going to bet on SMI as the method of defining interaction with a service mesh, and it's actually important to understand which of the service meshes adhere to the various capabilities of SMI, for example policy-based routing, or traffic splitting and traffic routing. SMI conformance is going to be a project that looks at that. It'll run automation against the service meshes which subscribe to be included, and it'll be able to say: right, this particular mesh implements these features, and that one implements those. Again, it's about providing consumers the ability to make the correct decision for them, and to do so in an easy way. Layer5 and Meshery are working on the particular tooling to facilitate that. It's pretty exciting; I think it's a really great way to benefit SMI as well, and hopefully promote and push that standard.

Is this Service Mesh Interface conformance the same as the service mesh performance you talked about? No, two different things. Yeah, sorry. Interface conformance is basically: does this mesh implement this particular interface? SMP, and can you flip a slide there, Lee, is all about providing a standard way of measuring the various outputs and performance capabilities of a service mesh. Now, why do you want to do that? Well, benchmarking is one thing, but there are a number of reasons why you need to be able to benchmark. You want to benchmark to understand a change, or a potential change, that you're going to make to a system. I think one of the things that will take time is educating users that service mesh is not free. I can't give you numbers, but potentially, if you're running layer 7 right through your stack, you're doing things like inspecting HTTP headers, you're double-buffering a lot of that request information into memory as you process it, et cetera. That takes CPU and memory. You could find that switching on layer 7 inspection across your entire service mesh increases your CPU and memory counts, your overall resource consumption, by, say, 10%. As an operator, you want to be able to make a decision based on that, because increased consumption means increased cost. Do you really need that capability right through the network, or just in bits and pieces? One of the goals of Service Mesh Performance is to enable people to make that choice better. For vendors, they can leverage Service Mesh Performance to run things like regression tests across the various versions of their software, which benefits them.
It helps them keep on track and ensure they're keeping up performance, with all the tooling there for them to do that. But it also benefits the consumer, because that same benchmarking can be used to educate. Now, one of the goals of SMP is to compare apples to apples. It's no good comparing the speed of a layer 4 TCP connection to a mesh connection running at layer 7, because there is a definite performance overhead on the latter. So SMP is designed to measure things accurately and comparatively. The other thing we hope SMP will enable is a plug-in ecosystem: the ability for the likes of, say, Datadog as a SaaS platform to provide specific service mesh metrics, just by consuming an interface which implements the performance specification. And the hope, again, is a universal performance index which gauges efficiency and efficacy. It's important for the consumer to have the choice; they want to be able to make a balanced decision between features and speed. Not everybody has exactly the same requirements, and not even the same requirements within the same application, really. So, big hopes for SMP to make some headway and get some standardization in that space.

As people digest that and formulate questions and comments, I'll toss in this perspective. A lot of times we see in our industry that infrastructure gets somewhat commoditized, and from the myopic perspective of a service mesh, if you're looking at a data plane, control plane, or management plane, you'd expect those lower planes to get commoditized over time, if we watch the history of how infrastructure goes. From my perspective, though, I see in a lot of respects the opposite happening here: the data plane can be quite intelligent. As a matter of fact, efforts that some of you on this call are directly helping with, in and around pluggable filters, whether those are WebAssembly or otherwise native to the project, mean that you can ask even more of a service mesh, and it can be even more dynamic to the extent those filters are dynamically pluggable. So the ability for us to use a common nomenclature, and a common exchange format, to discuss how much it costs to run a more highly intelligent piece of infrastructure, or how much you're saving relative to the time it would otherwise have taken to perform that task, from my perspective this actually becomes more important over time, as data planes are fairly powerful today and potentially get even more so going forward. Comments on this? Questions on this?

Jim, your note on the difference between SMI and SMP: good note, and quite a common question when you first hear about them. I think the two are highly complementary, in so much as SMI facilitates a standard interface for describing a traffic split, for example, while SMP provides a standard unit of measure for that traffic split's performance. Okay, so I took a quick look at the SMP site that's linked on this page. Is it not part of the CNCF today? Is it a standalone effort? Because it shows CNCF among the contributors, which makes me think, okay, it's an independent project, not part of the CNCF. Can you help me position that a little bit?
Yeah, good question. The hope is that in a couple of weeks this becomes a CNCF project. So, to be concise: it's not today, and those partners you see listed are in agreement that we wanna bring it over here. Got it. And, well, I don't know quite how you gauge this, but it's relatively young in its development life cycle, though the concept has been around for quite some time. This really got started in the Istio performance and scalability working group. I don't think it was called this then, but there was an acknowledgement that such a thing would be really helpful, and there was an initial set of YAML to help describe it. So, through engaging in that working group, this has rolled into something we can give a name to, and hopefully roll into the CNCF with broad participation. Part of the goal here, if there is value found in this common way of measuring, this common way of describing the environment and what you're doing, is that you'd find either implementations of it in each participating service mesh, or the canonical implementation of SMP that exists today inside the Meshery project, which hopefully goes the same route and comes into the CNCF shortly as well. Either the service meshes themselves implement this spec, or maybe they run Meshery in their pipelines to perform some of the things Nick just mentioned: regression analysis of performance with each build or each release, done in a consistent manner. Yeah. Okay.

Question, and maybe this is going too far off topic: do you see this as a standalone sandbox project, conventionally, or nested in with SMI, or something else? Or is that a big TBD conversation? I think the intention is a standalone sandbox project, to the extent that there's, my wife hates it when I use this term, but a smidge of overlap. The overlap being, in the best of ways, a very simple example: if in SMP you're going to say, hey, the service mesh being measured is Kuma, as a random example. SMP currently is a collection of proto files, and in there it has names of meshes. If there were an SMI proto file, which there isn't, but if there were a common way of describing the fact that this thing represents Kuma, great, you could use the same moniker to identify that mesh. But yeah, they're really complementary, and to the extent that helps, that would be great. Go ahead, Kevin.

Thanks. Does the Service Mesh Performance tool set currently make use of the SMI metrics implementation, or is that a planned thing, or do the two just not meet at all? They do, actually. I can see Nick's head going up and down. It's a bit like the example I was giving around traffic splitting: one configures the environment, the other measures it. What SMP is today is a specification, and there's a reference implementation of that specification in the Meshery tool, and Meshery does implement SMI. It does both: it speaks to the service meshes directly as needed, but it'll also leverage SMI to the extent that it can. As a matter of fact, I think that answers it, yeah.
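For those following along, Kevin's question refers to SMI's traffic metrics API. Below is a hedged sketch of what such a resource looks like, with the shape recalled loosely from the v1alpha1 metrics spec; exact fields may differ:

```yaml
# Loose sketch of an SMI TrafficMetrics resource; recalled from the
# v1alpha1 metrics API, exact fields may differ.
apiVersion: metrics.smi-spec.io/v1alpha1
kind: TrafficMetrics
resource:
  kind: Pod
  name: checkout-v2-5d4f8        # hypothetical workload
window: 30s                      # period the metrics cover
metrics:
- name: p99_response_latency
  unit: ms
  value: 12
- name: success_count
  value: 9873
```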
And actually, to Nick's prior point about the SMI conformance bit of the work stream: Meshery is very much aligned with the goals of SMI in terms of helping validate conformance. Good questions. I think I'm sharing the wrong deck; too many decks.

So Lee, I have a quick question about SMP. Who submits the performance numbers? Is it the SMP working group, a work stream, a project? Who does the actual calculation of the benchmark? Yeah, good question. Bear with me one second, Steve, I'm trying to get my screen sharing going. SMP itself is a collection of proto files, and the first implementation of it has been in Meshery. Here's a good example, if we can use this example. Each of these projects, by the way, is intentionally being presented mid-flight so that folks can influence them, and I caveat what follows with that. The way Meshery provides conformance today is that it runs a suite: it runs a bunch of tests and makes a bunch of assertions, provisions up to eight different service meshes, tries to ascertain whether or not they're compliant with the SMI spec, and bundles up those test results. It has the ability to persist those locally or send them off remotely. I use that as an example because it implements SMP in much the same way. And actually, maybe this is a good point to roll into the next project, because there'll be a demo of this. The way Meshery implements SMP is to describe the environment, capture the detail, do the thing that SMP does, but also to run load tests, collect the results, do some statistical analysis, and collect that test result in an SMP-described format, which it can also send back and persist. As hopefully both SMP and Meshery go into the CNCF, our hope is that each of the service meshes that find value in it will run it in their pipelines, and that it would transmit back not only the SMI conformance of that mesh but also SMP-formatted performance test results.

So therein lies part of the answer to your question of who submits. Part of the vision for Meshery as a project has been this: a lot of people have asked, hey, where's the performance analysis? Where's that paper published? And the group has been really hesitant to do that, because a lot of times when you assume, you end up making an ass out of yourself and everyone else. Rather, we try to give people tooling to let them go do the analysis themselves, as we potentially use the CNCF lab to run some of those analyses. We would call for participation from each of the service meshes, to ensure things are configured in the right way and that what we're getting is as apples-to-apples as is even possible, which isn't entirely possible. Rather, it's the service mesh manufacturers, or the projects themselves, that are empowered with the same tool, using the same common format, to send in those reports, or keep the reports if they want to, or both. Thanks, Lee. Yeah, good. Well, let's get to it; I hope there's a little bit of a demo here that will help follow on Steve's questions.
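As a rough illustration of what an SMP-formatted result bundle could contain, here is a hypothetical sketch; the field names are invented for illustration and are not the actual proto definitions:

```yaml
# Hypothetical SMP-style test result; field names invented for illustration,
# not the actual proto definitions.
smp_version: v0.1.0            # hypothetical spec version
environment:
  mesh: istio                  # mesh under test
  mesh_version: "1.6"          # hypothetical
  mtls_enabled: true           # the kind of detail that makes results comparable
load:
  generator: nighthawk
  qps: 1000
  duration: 60s
results:
  latency_ms:
    p50: 2.1
    p99: 11.4
    max: 42.0
```

The important part is the pairing: a description of the environment and the load alongside the statistical results, so that two runs can be compared honestly.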
So, Kush, there's some distributed performance analysis you've been working on, in combination with a couple of the Envoy Nighthawk maintainers. Do you want to tell folks about this?

So the problem was that many performance benchmark or analysis tools are limited to single-instance load generation, single load generators. This limits the amount of traffic that can be generated to the output of the single machine the benchmark tool runs on, in the cluster or out of the cluster. Distributed load testing in parallel was a challenge when merging results: we need to ensure certain properties, like not losing precision, and we also want to gain insight into high tail percentiles. So we carried this project forward, and it was proposed as a Google Summer of Code idea for the CNCF; Summer of Code acted as a catalyst to execute the project. The project doesn't only enable distributed performance benchmarking: as we know, different microservices behave differently under different workloads and exhibit different signatures, so the project will also enable us to understand how different microservices exhibit those characteristics under different workloads.

For the project we collaborated with the Nighthawk maintainers. Nighthawk is a layer 7 performance characterization tool created by the Envoy team, and hopefully it's going to support distributed load generation soon. And we took Meshery, which is a service mesh management plane and which currently supports wrk2, fortio, and Nighthawk as single-instance load generators. We integrated Nighthawk into Meshery with the help of an external library we created, namely Go Nighthawk; the library acts as middleware to consolidate the implementation of Nighthawk into Meshery. Here's the link to the design spec in the slides, where you can see how the complete idea is proposed and how the plan of action is carried forward.

This is a design mockup of the interface as it stands. Users will have the ability to choose between single-instance and multi-instance load generation. We also give a choice of load generators: fortio, wrk2, and Nighthawk. You'll also have the ability to process the results, and you can compare different results and benchmark analyses with each other. And in the Meshery results we have also implemented a canonical implementation of the Service Mesh Performance spec, which Lee was just talking about and Nick Jackson explained briefly.

So I'll just show you a quick demo of how the load generation takes place. I hope my screen is visible. It is. So let's quickly run this. Here we need to specify a URL. The different load generators behave differently with the DNS entries, and the IP versions of those entries, for the URLs we give for the test, so different load generators sometimes produce different results and different benchmark analyses. Here's the result we just got from the load test we ran against google.com using the Nighthawk load generator. If I navigate into the results tab, I can see there's a variety of results; I can select some of them and see a quick comparison between them, and moreover, we have the canonical implementation of the Service Mesh Performance spec.
So if you click on download, you'll see the performance results gathered in the format of the specification Nick was talking about. Yeah, that's very nice. One thing, Kush, I noticed in your demo: you hit a server, an endpoint, that wasn't on a service mesh, which is maybe a good callout, because one of the first things people want to understand is what the performance characteristics, the differences, are between running my service on the mesh and off the mesh, which is kind of nice to be able to do. You were noting some of the differences in the, well, algorithms, I guess I'd say, the statistical analysis each of those load generators uses, and there's a bit of a difference in the way they might generate load as well. Boy, I'm going to forget the actual term here. None of those load generators are the type that academics like to use. As a matter of fact, Pratik Sahu has been collaborating in this area for a while, and I just noticed Pratik is on the call, a PhD candidate at UT Austin. Pratik, help me with the type of load generator that I'm trying to...

Hey, yeah, hi Lee. So there are two types of load generators we look at: open-loop load generators and closed-loop. To see how much we can push the servers, open-loop load generators are usually what we academics like to focus on, but most of these load generators are closed-loop load generators, which rely on the response: they send out a request only when a response has been received on a thread. I believe that is the distinction Lee is mentioning. It is, yeah. If I recall off the top of my head, there's a reason why there was wrk and now there's wrk2; I think the difference being coordinated omission, which is part of how the different load generators behave in terms of being open or closed, and also in terms of how they do their analysis, from when they start measuring to when they stop.

So anyway, that's being done today from a single-load-generator perspective. Part of what Kush and the Nighthawk maintainers he's engaging with are working on is distributed analysis, which I think, and I think Pratik and the others involved agree, will unlock some new insights. We're now in a world where we're running lots of microservices, and a popular microservice might enjoy a lot more use than anticipated, particularly East-West traffic that it maybe wasn't initially designed for. So I'm excited to enable people with easy-to-use tooling and a standard measurement mechanism for understanding and characterizing that. Comments?

So that was not an example of a distributed load test, where we're running multiple Nighthawks from multiple clients? No. And so, the simple answer, and I'm glad you asked, is that the project Kush and Pratik were just speaking to is 50% of the way there, if you will. So it's either an excellent time to present, to get influence from Mitch and others, or maybe not the best time to present until it's all done. For our part on these projects, whoever has come to bear, to influence and provide insight, has been, I hope, really warmly welcomed. No, it's good, you know, work in progress, that's great.
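An aside to pin down the open- versus closed-loop distinction Pratik describes: the sketch below contrasts the two modes with invented keys, purely for illustration rather than any tool's actual configuration:

```yaml
# Illustration only; keys are invented, not any load generator's real config.
open_loop:
  rps: 1000        # requests are dispatched on a fixed schedule regardless of
                   # whether earlier responses have arrived, so queueing delay
                   # shows up honestly in the measured latencies
closed_loop:
  connections: 64  # each connection sends its next request only after the
                   # previous response returns, so a slow server quietly
                   # reduces the offered load (the coordinated omission problem)
```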
Actually, Mitch, forcing you to put an Istio hat on: one thing that would be insightful, both toward the service mesh patterns we were talking about at the start of the call and here, and this isn't favoritism, this is just because I've spoken with every single service mesh that's out there, is this: the former Istio performance and scalability working group, in combination with some of the folks at IBM and Google and others who would come in there, had a number of common benchmark tests and things they would look for, whether it was X number of Envoys, or this many namespaces, a lot of things at much bigger scale than those who have been working on this tool have had access to, and that's in part our aim with the CNCF labs, to be able to do some of those things. My point is there's a lot of knowledge within that working group, particularly of the form "here's the type of test that should be run." Part of that is the examples I mentioned, and part of that is based on workload type: it might be the exact same test, but a different type of workload. A very common question from people we've engaged in this project is: what are you using as your example workload? Pratik will bring up, hey, are you running an instance of GitLab's infrastructure, for example, or some social network, or something database-heavy? Anyway, to the notion that we're only halfway there: getting influence from others about what types of easily repeatable tests there should be would be really helpful.

Yeah, I think the number one thing I would take away from the work the telemetry group did regarding performance, which is now being folded into the test and release working group, is that the details are very important. Looking at that YAML file, it's possible these fields exist and are just not populated, but it would be great to be able to annotate it with information about the details of the test: this was run with mTLS enabled or disabled, or with these authz policies, and it was run against this type of client application. If you can track that from one test to the next, then you get the ability to say, hey, when I kept all of the details the same and only changed mTLS, here was the impact of that one minor change.

Yes, absolutely, and the example Kush just showed was a 15-liner or something. What's defined in the spec today needs a good deal more of exactly what you're highlighting, which is exactly why we're investing in the tooling being built. Because, good God, I think performance engineers don't get paid enough; there's a litany of: did you have an ingress gateway or not? Did you have egress? How big was it? And to your point, any variation in any number of these variables really has an impact.

I was just gonna say real quick, I think it's great that you're getting people together to work on this. There was sort of a lack of interest in performance analysis in Istio.
I mean, we reached a certain point where most of the performance tuning was in Envoy itself, so we dissolved that workgroup, because there was a lack of interest; it was like two people would show up and talk to each other, which is the sign to wind down a workgroup. But if there are 25 people involved, you might get more out of it. So I think this is great. Thank you.

You know, Lee, to that point, do we expect to see substantially different performance numbers from different Envoy-based service mesh implementations? That's a good question. I won't name names, but I had that conversation a few times, my gosh, like a year and a half ago. I would maybe put it like this: within the control plane, does having Mixer in the picture or not make a difference? I guess it's a rhetorical question.

Just to add, I think this could be something more for the future, because Envoy is currently opening up extension points with Wasm filters, which run in the hot path. Once folks have control to, in effect, change the behavior of Envoy, you probably will see greater variance across the various service meshes depending on which filters they use and how they use them. One of the common things at the moment where you might see a slight difference is things like ext_authz, because ext_authz is a call-out, and then you've got things like rate limiting, which again is a call-out. But I think the variation will probably grow as Envoy becomes more extensible outside of the core code base. That makes sense, thank you.

And Mitch, actually, I'm curious for your feedback on what I was alluding to, around one control plane not necessarily being the equivalent of another, to the extent that Mixer was doing a lot, and is still doing a lot, but in a different area. I'd had an early conversation with a product manager for App Mesh, and I think that was their perspective too: it's just the Envoy data plane. Well, what the control plane costs depends on what you're doing.

Yeah, I think highlighting that Wasm will really be a game changer in terms of comparative performance makes a lot of sense. If all we're doing is serving simple xDS listeners and endpoints, I would hope not to see a substantial difference between the two; those APIs are relatively tight in terms of their implementations. But yeah, Wasm's a whole new frontier in terms of performance, so that makes sense. And I don't know about other service meshes, but at least in Istio, Wasm is becoming part of the de facto, the default, implementation: the data plane will have Wasm filters loaded. So seeing performance differences there makes a lot of sense.

Blake, I won't necessarily speak on behalf of Consul's roadmap, unless you will. Thanks for putting me on the spot, Lee. I'll just add that it is something we're looking at, like the other service meshes out there. We see a big opportunity for Wasm to allow users and operators to do things above and beyond what we as a vendor have built into the product. So there's big potential there from an extensibility standpoint, and it's something we're keeping our eye on.
And obviously that ecosystem is early and maturing, but I think as it matures, we will look at what opportunities we have to incorporate it into Consul. Very good. And, what's today, the third? Published yesterday was a Linkerd blog post talking about the road ahead for linkerd2-proxy; turns out Wasm is a popular thing, it was number four on the list. I'm personally looking forward to the day when all my application code goes into the service mesh's Wasm and I don't actually have any microservices whatsoever, just proxies and Wasm modules. Watch KubeCon 2024 for the horror story by company X: why putting all our business logic into Wasm was a really bad idea, and now we've suffered the worst outage of our lives. But people will do it. Yeah, they will.

And this is important. From my personal perspective, this is why we've been investing a ton of time in this space: I think there's a bunch of application infrastructure code out there. We talked about service meshes taking care of infrastructure concerns, and I think there are a lot of parallels between serverless things and service mesh things; there's a very similar value proposition. Service meshes speak really well to absolving applications of some of those lower-level considerations. And going forward there's even what I would call application infrastructure. Nick mentioned ext_authz before; there's a lot of commonality in application infrastructure, users, tenants, price plans, a lot of things you need around the actual business logic you're trying to achieve. Some of those things the service mesh, or an intelligent data plane filter, is already looking at; it's already inspecting that header. So as people go to explore that, and go to prepare for their 2024 talk, they might have a common vernacular to describe it and easy-to-use tooling to test it.

I'm very excited about Wasm. I think it's gonna present an incredible opportunity. There's a lot of fear around whether it's gonna be the next ESB. I would argue the ESB was actually probably not the world's worst pattern; it was more the implementation that was wrong about the ESB. But parking that aside, I think one of the really interesting opportunities is when we start looking at security. One of the core competencies of service mesh is the ability to do microsegmentation, and the concept behind why we need to do that is that the firewall as a perimeter is not as successful a form of defense as we would all hope; there are ways around it. I have a crazy vision of somebody building a micro, distributed WAF that would run as an Envoy plugin filter: the ability to do individual request-level inspection on an East-West basis rather than just pure North-South. And when you start to think about that becoming the norm, you really, really start to need to think about being able to accurately measure, and consistently reproduce the measurement of, tests and things like that. Lee, you mentioned the cost of running control planes, and Nick, I think you referred to that as well.
Right now, with the results we saw today, we're talking about latency on the traffic, which is probably the number one concern of most service mesh users: what sort of latency characteristics are they gonna see? Are we also going to see execution costs, in terms of CPU and memory, for the data plane and control plane? That's Pratik's area, Mitch. Yeah, the short answer, Mitch, is absolutely; Pratik just presented on that at KubeCon EU, and there are a few early results from some of his research up. I think being able to articulate that in a granular way matters, so that people can make decisions on whether or not to take a sprint, or a couple of sprints, of the dev team to go do the thing you might otherwise do with a filter. Hopefully part of that decision is: what does it cost us over here? What does it cost us over there? Thank you.

I really appreciate all the questions today. This has been really nice; people gotta go. Please signal your interest in the Slack channel, or on the mailing list, or any which way you want to. We'll try to organize a bit and get some things going asynchronously, providing a place to put in thoughts and comments and bring your influence. I'm looking forward to this. I hope this is as vendor-neutral as we can get, or as oriented toward the end user as we can get. In part we're creating this because there is an end user service mesh working group talking about patterns, and they're having all the fun by themselves; they don't want the vendors over there, and that's all right. But then the vendors aren't getting to work on the patterns and the feedback that they need. So yeah, we need to work on those. Yeah, thanks guys. This looks great. Thank you all. Talk to you soon. See you guys.