So I think we probably have a quorum for today, and folks are still joining. We'll get kicked off here, and then in a few minutes, Lee, when you're ready, just let me know and I'll pass over to you.

For the first part of the call, I want to talk a little bit about the work group's future, right? There are a couple of things we've identified that we want to work on in this work group. At the same time that we've been having meetings and discussions, the CNCF TOC has also been having meetings and discussions on creating SIGs, and one of the SIGs they've identified is called the Traffic SIG. It's not the first one that they're working on, but I've asked them to look at it second. They think that our network work group should probably fold into that Traffic SIG, so we'd become part of the Traffic SIG rather than being a work group outside of the CNCF TOC SIGs. There's only a handful of us on this call, but I see Lee nodding, so I'm assuming most everyone's in agreement that that's the right next step for the work group.

Yeah, Ken, for my part, I agree. And actually, was that SIG previously named Network, and maybe just refactored to be Traffic so it incorporated more, or less?

It did start out as Network, and then it changed to Traffic without me realizing it until maybe a month ago. I sent an email to the TOC privately asking whether it's going to be named Traffic, or Network, or Network Security, because I asked them what they're going to do with the security piece. They had the SAFE discussions, right, but SAFE wasn't really a SIG. So I raised that question and never got a response, actually. The last TOC call I couldn't join; I had a major incident at work that I had to jump into and help with, so I don't know if they even discussed it on that TOC call or not. They did send out a vote. I think the vote was just on having SIGs at all, not on the specific SIGs they were going to have yet. So the first vote approves that we're going to have SIGs within the CNCF, and the second vote would then, I guess, come out of discussions on what SIGs we want to form around the new model. So to answer your question, Lee: yes, I believe it changed from Network to Traffic, and I'm not sure where the security aspects are going to land yet.

Okay. At the risk of opening up a rabbit hole that has no bottom: there has always been this kind of uncertainty... well, it's not really an uncertainty. My point is that there's a Kubernetes SIG Network, and Kubernetes is not the whole of the CNCF, but there's not much in the gap, and I think that definitely confuses people. Or possibly we're doing it wrong. I mean, when I go to KubeCon, aka CloudNativeCon, and talk about CNI, people ask me questions that begin, "What does the CNCF think about such-and-such?" And my answer is that I've never really heard from the CNCF on the subject of anything to do with networking.

Yeah, Brian, I'll add some commentary. I think that with the reboot of this working group, as the SIGs come forth and this becomes the Traffic SIG, there's actually a litany of unaddressed topics, conversations, and projects out there that need analysis, or need a venue where they can be presented and discussed.
Yeah. At least for my part, the networking working group had been one of the first to curate and suggest a project. I recognize CNI was like the tenth one in, but I don't know how many of the other working groups have really gotten that far and done that process. After CNI, after we'd done that initial duty, things slowed, I think, between people's personal schedules and a bit of uncertainty about how far a charter can go: how much responsibility the working groups are empowered with, and how much they're encouraged to go off and do things. I haven't spent a lot of time inside the serverless working group, but that one certainly took it upon itself to expand its scope of responsibility well beyond that. It even generated something of a sub-working-group on cloud events; in essence, that working group spawned a new CNCF project in the CloudEvents spec. The verbiage inside the SIG doc that's being voted on outlines a lot of that a bit better: the mechanics of how this should work, and the delineation of responsibility between the TOC and the SIGs.

I totally agree with you, and it's in part why Ken was guiding us over the last couple of years toward a definition of a roadmap for the working group: what topics are out there that we need to address, and how do we either generate our white paper or get more activity in discussions, really just more participation. For my part, I've got my fingers crossed that, one, Ken is able to take on a similar role in the forthcoming SIG, and two, with the collection of new entrants and new interests, we'll maybe see it change. I think there will still be a bit of contention, in that a lot of the stuff we talk about is in the context of Kubernetes. Even for CNI, on the delineation of who has responsibility and governance over individual projects, I hope it's broadly clear to everyone that those projects are self-governing. It's not these working groups that lord it over them and tell a project like CNI that you need to update this or change that to be compatible or whatever. I think that was a point of confusion. I don't know if you felt it, being a CNI maintainer, before this call, but, anyway. How long have we been doing this networking work group, like a year?

It's been about a year and a half now. So it's been going on for some time. Well, to your specific point: yes, I knew the projects were self-governing.

Yeah, no, sorry, not you. I mean, not aimed at you, but to the layman, or to the uninitiated...

Yeah. People kind of fill the vacuum with their own idea of how it might work. People on the outside.

Ken, the part of the process you're taking us through, outlining topic areas: it sounds like that's headed toward being good input into the Traffic SIG.

Yeah, yeah. I think the main ones we've discussed are around service meshes, and service chaining, which is not related to service mesh but, you know, has the same first name, of course.
But, you know, it does touch things like firewalling and load balancing and DNS. We also had some presentations on IPv6 support: how do we make IPv6 more of a standard, supported pattern within the cloud native models? Most of the cloud native solutions out there are not IPv6-compliant, or even IPv6-capable for that matter, other than through an IPv4-to-IPv6 translation layer, right? We also talked about things like QoS, and how to map some of the QoS types of models into a cloud native framework. And then we talked a little bit about some of the day-two aspects that are needed in networking, which is what I think your presentation originally brought up: the ability to visualize traffic and the network, and how that relates to things like cloud events coming out of the serverless model, right? How do we declare what parameters, or what metrics I guess I would say, need to be monitored and managed in a cloud native network model? That helps with the deployment patterns for cloud native models.

Having brought this conversation up: Ken, as we were considering a networking work group deep dive for KubeCon EU, and we've got a couple of months, I wonder if it isn't time to document that. Or, in order to kickstart the Traffic SIG (I don't know if the SIG really needs booting), to make it part of the formation.

Sounds good. And as a point of information, the CNI project has not applied for a spot on the agenda. Well, not yet. I mean, it's past the application date, but we could probably get one. But I would think the Traffic SIG session is the better one, because I think it better matches what people think they're showing up to.

Actually, Brian, a point of curiosity with your CNI maintainer hat on: has there been much ado about the Istio CNI plugin?

It came up. Yeah, I mean, that's a good example. CNI as a project doesn't care who uses it. It's a spec, it's an interface. And so the fact that the Istio project has found something useful to do with CNI is just great; move along. And that's exactly the kind of thing that trips people up, because they kind of want us to say something. With my CNI maintainer hat on, the most I can say is that we're not going to put something into CNI that stops someone like Istio doing something. We're not going to make a change for Istio that stops Kubernetes doing something. We're not going to make a change for Kubernetes that stops some other project doing something. That's the main line that we draw: be a neutral interface for everyone. And you get that far in the conversation and people go, well, that's kind of boring. Right. Yeah.

I guess, to your point, the implementation work is being done in the Istio CNI repository, in that project itself. I've spoken with one of the contributors to that project in the context of this next presentation, Meshery, mostly to gain an understanding of the Istio CNI goals. Anyway, it was enlightening; just a point of interest. And then, Ken.
I think that's why I wanted to leave it there, and I think we'll take some actions: I might send a note to the mailing list identifying some of these areas we talked about and get feedback from everyone on them, and then we'll take that into the SIG formation. Nice. And with that, I'll turn it over to you, Lee, to give the presentation and demo.

Very good. Well, as I go into this, let me call out Girish, who's on the phone as well and who's going to partake in this presentation. Girish and I just made these slides 45 minutes ago, so they're pretty well-baked at this point. There are a couple of reasons why Girish is here presenting with me: one, because he's a driving force behind this, and two, because he's part of the genesis story for why we got off of our lazy rumps and laid down some code into a project that has been dubbed Meshery.

The genesis story is that he and I went around this last year giving about five different Istio workshops, introducing people to service mesh concepts and teaching them, going through a bunch of different labs on how to use Istio. Over that year, a pretty clear theme emerged. At least for me, there were two questions that were universally asked, almost as the first two things out of people's mouths once the presentation was over. They came from an adopter's perspective, which most of the audience generally was: people not yet using a service mesh. The first question was basically: "This is great. I see the value; there's a lot of promise here. So which service mesh do you recommend I get started with? It looks like you've spoken to a number of them." I'll withhold my response, but the acknowledgement there is that there's value in facilitating familiarization with the options: what's different about them, and which ones are better for different use cases or for your particular environment. There are many, many examples of this, but near and dear to my heart has been the container orchestration wars. I don't know if it was 2018, or if it's this year, that we're anointing the year of the service mesh, but as we enter into this sort of third phase of people's cloud native journey, at least in my mind, hopefully we collectively can do better about facilitating people easily deploying these things, understanding them, and gaining some familiarity in a sandbox environment. I consider that, and maybe I've got this wrong, Rancher in the days of the container orchestrators probably did that pretty well: it really facilitated easy deployment of your choice of the more popular orchestrators at the time. So it's our hope that maybe Meshery can help facilitate that.

But more pertinent to our discussion today and to this working group is the second question that commonly came up. People would basically start to say, "Well, great, I see the value; a service mesh is a very interesting concept." And then the engineer in them says, "But what's the catch? This doesn't come for free. There's overhead here, and you're not telling me what that looks like."
And so Meshery is intended, and what's being worked on foremost right now, to be a performance benchmark tool. It's there to help illuminate and answer that question: what's the catch? What's the overhead? Hopefully in layman's terms, and with a decent user experience, one that errs on the side of being usable and potentially not as complex as different projects might need it to be. We're going to talk about the various projects, contributions, and interactions that are going on within the tool itself and within the community, but suffice it to say for the moment that I can see a future where some of those projects are potentially using Meshery as their performance benchmarking tool; I'll highlight why that might be. So yeah, it's those two questions that are front and center for the tool.

The first is an adopter's question, but I think an ongoing, open source tool like Meshery can also be valuable to the operator or the developer. Basically, on day two, after you've stood up a playground and messed with things and chosen a service mesh (or maybe you've chosen two, because you're in such a large organization that you've got multiple teams doing different things), it can still provide a sandbox for understanding new functionality as new versions of your chosen service mesh come out. You can go play and experiment with those, and do it in the context of your application. Some service meshes facilitate this better than others, but to the extent you're able to deploy them easily with a tool like this, it could help.

Then there's the question about performance, and it's an ongoing question. Initially it's "what's the overhead?" But maybe you've deployed, say, Istio 1.0, and then 1.1 comes out, and then very quickly 1.1.1 comes out to fix the bugs. Did that introduce any new overhead? Hence the ability to run these performance tests and keep a history of them: run them against sample apps, just to facilitate playing with the mesh, or against your own applications in your environment; persist those results; and share those results. The tool will facilitate collecting those anonymously, and part of the project's goals is to publish that back to the public, hopefully in a way that doesn't make any particular mesh look poor, but really helps facilitate adoption and answer people's questions; ideally it helps make all of the meshes look good. To be able to say, hey, on average, the percent of CPU used is only this much for any given service mesh, so what are you waiting for? Go use them. So that's the genesis of what we're hoping people will get out of it.

We've also gotten an initial start with the University of Texas at Austin. I'm here in Austin; my drawl doesn't come out all the time, but I try to drop a y'all occasionally. A professor within the electrical and computer engineering school got two of his graduate students to wrap their theses around this tool and the analysis of performance, so we're getting some assistance there. There's a high-performance computing environment they're willing to give us time on.
We've gone to each of the more prominent service mesh projects, engaged with them, described the functionality, and requested contribution, and we've gotten very positive feedback; we just spent a very good hour with HashiCorp yesterday on Consul. The way Meshery itself is structured as a project, it has adapters (we'll look at the architecture here in a minute), and adapters for these more prominent service meshes are in the works. We've met with the App Mesh team in person a couple of times and discussed this tool very early on, and now we get to go back to them and say, hey, it's at a place where you may want to consider writing your adapter. As a community, we meet once a week, take meeting minutes, record the meetings, and post them on YouTube. I think the project's been fortunate: just yesterday it was asked to be presented at Service Mesh Day, and at DockerCon coming up; it's on the schedule for KubeCon EU, and in the interim at Container World. So that's great. The other nice thing, just in trying to bootstrap interest, is that it's part of the goals of this project to be incorporated into the CNCF landscape, so we'll see if people can assist there.

So maybe enough with the boring stuff; let's take a look at the architecture and do a bit of a demo. The architecture is relatively simple: a tool written in Go. It's intended to be a utility, a Go binary that's containerized, that you can deploy on your local development machine, or into your cluster if you want to. You can deploy it onto the mesh, but I'd probably recommend that you don't, so that's one less variable in the environment as you go to do performance testing. Right now it's using Fortio as the load generator. Fortio is the load generator of choice for Google's performance testing of Istio. IBM's performance testing of Istio uses a proprietary tool that they do some wonderful work with, and they publish some of their results; there's a very interesting performance working group going on in the Istio community. If you've paid attention to that, the latest Istio 1.1 release made some significant strides in terms of performance: realizations around the cost of telemetry, like the cost of gathering samples of distributed traces and how frequently you're sampling. That stands out as a resource draw, depending on how frequently you sample. Anyway, the point is, back to the architecture: Meshery is a tool deployed locally; it connects up through the adapters to your service mesh; you either have it deploy a sample app for you or point it at your app's endpoint; you tell it to generate load; and it sends back the results.

I don't want to steal your thunder, my apologies; I just want to ask a simple question here. How do you define a service? Give me one or two examples.

Mehmet, I heard your first question, which was how do you define a service, and the answer there is your application: the endpoint, assuming that's the thing you mean.

Okay, so an application, you call it a service. So then, if I have two services, then... I basically don't know what "mesh" means.
If you have two applications, what is the mesh between the two applications?

Well, I don't know how much time you've spent... hopefully you can steer me as I go to answer your question; various folks have spent different amounts of time around service meshes, and with the different styles of deployment. But more or less, consider that you have a workload today, an application that you're running, and it has an HTTP endpoint, or multiple endpoints, that people interface with, whether that's a REST API or a web-based interface. In the land of microservices, you ideally need a bit more tooling and control around the way traffic flows through them. Maybe you enforce circuit breaking, or you want to get some visibility, like top-level service metrics for...

Maybe the key point is that "mesh" is just a buzzword. Sorry, it's Brian butting in here. So if you can't quite see why the word "mesh" is appropriate, that's fine. It's just a word that the people doing the stuff have picked to describe what they're doing. And so Istio does service mesh; service mesh is what Istio does.

Well, I mean, is it load balancing? Is it resource allocation? What is the mesh?

It routes calls to endpoints, to implementations of the service, and it can do that under a sophisticated set of rules, for reasons like load balancing, or canary deployment, or because you want your calls to go to the same data center for preference, or a hundred others. It's all about callers finding callees.

But then it means really controlling the applications. At least you have, I don't want to say the right word, you have control of the applications from one place. Is that what the mesh is?

The mesh, these things like Istio, intercepts the calls and redirects them transparently to the application. And that is why you would want to measure the overhead.

So it intercepts, I see. This is a very good diagram, by the way.

Yeah, I'll put a link to this in the chat, because you've got some fantastic questions. Very basic questions, but I appreciate the link. No, absolutely. If I could figure out how to get out of this... very good. So anyway, let's jump into it; I think that'll clarify a few things. With that, I'll hand it over to Girish, so he can show how to spin up the tool and take it for a whirl.

Hey, guys, can you hear me well? Yeah. Okay, awesome. So to start with, I know I'm starting off with a blank terminal, but to get started with the tool, we have the instructions posted on layer5.io/meshery. I'll bring up that site once I share the browser. But before I go to the browser, I just wanted to show you how to very quickly bring the services up. Right now I'm in a folder where I've cloned the Meshery repo, and in this repo there is a Docker Compose file. So the simplest thing to do is a docker-compose up. If you want to run it in the background, you can add the -d switch, but I'm just going to let it run in the foreground. Now, as in Lee's portion where he showed the architecture, Meshery on your local machine consists of three components, three services.
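As a reference for that step, here is a minimal sketch of what such a docker-compose.yml might look like. The service names and ports (the UI on 9081, the adapter reachable as meshery-istio on 10000) come from the walkthrough; the image names are assumptions, so check the repo's actual compose file.

```yaml
# Hypothetical docker-compose.yml for the three services in the demo.
# Service names and ports follow the walkthrough; image names are assumed.
version: '3'
services:
  meshery:
    image: layer5/meshery:latest        # assumed image name
    ports:
      - "9081:9081"                     # UI served on localhost:9081
  meshery-istio:
    image: layer5/meshery-istio:latest  # assumed image name; the Istio adapter
    ports:
      - "10000:10000"                   # gRPC port the main service dials
  fortio:
    image: fortio/fortio                # the load generator
```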
So when I do a docker-compose up, you'll see all three components come up. The first is the Fortio container, which is our load generator. The next is the Istio adapter for Meshery, which was created by us; of course, there will be more in the future, since, like we mentioned, we're working with all the other vendors. And last but not least is the Meshery service itself. You can see all three running and waiting, so I'm just going to switch over to my browser. I hope you can see my browser now. Can you see it? Yeah. Excellent.

So this is the page I was referring to; I think Lee also shared the link to it. There's a section that talks about running Meshery: essentially you just clone the repo and do a docker-compose up, like I mentioned. Once you do that, Meshery itself runs on port 9081 on your localhost, so you can go to that...

We're just seeing the Meshery page; we're not seeing it change. Oh, I'm sorry. No, thank you. Can you see my screen now? Yeah, we can see it now. Okay, awesome. I think when I closed the tab, the screen sharing automatically paused. Okay, cool.

So here is the Meshery page, and here are the brief instructions on how to run Meshery: you cd into the directory after you git clone, then do a docker-compose up, and you'll see the services running. Then, to access Meshery, it runs on port 9081 on your localhost, so I'm just going to open that in a separate tab. I really hope you can continue seeing my screen. Once you go to localhost:9081, you'll see that the page immediately redirects you to a login page. It's a very simple login page; we have single sign-on set up with Twitter and GitHub, and you can choose either. Since I'm already logged in with Twitter in this browser session, I'm just going to continue signing in with Twitter. You'll be presented with a screen to authorize the application; you just need to authorize it, and once you do, you'll be taken into the Meshery application.

Now, the Meshery application contains several sections. Right after you log in, you'll be presented with a performance page. Of course, you could start right here: hit a URL, specify some parameters, hit submit, and see the results. But before I go there, I thought it would be nice to take you through configuring, and connecting to, Kubernetes from the Meshery instance. I'm going to give it a kubeconfig file; this is the kubeconfig for a cluster running in one of our labs. Now, this is an admin config file, so there was no context to pick, but if you have multiple contexts in your config file, you can specify the context.
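As an aside, a kubeconfig with more than one context looks roughly like the sketch below; every name in it is illustrative, and the context name is what you'd select in Meshery's form.

```yaml
# Illustrative kubeconfig with two contexts; every name is hypothetical.
apiVersion: v1
kind: Config
current-context: lab-cluster
contexts:
- name: lab-cluster            # the context you'd pick in the Meshery form
  context: {cluster: lab, user: lab-admin}
- name: dev-cluster
  context: {cluster: dev, user: dev-user}
clusters:
- name: lab
  cluster: {server: "https://lab.example.com:6443"}
- name: dev
  cluster: {server: "https://dev.example.com:6443"}
users:
- name: lab-admin
  user: {}
- name: dev-user
  user: {}
```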
And for the mesh adapter location: since all the components are running as part of the Docker Compose setup, I'm just going to give the service name, which is meshery-istio, and the port, which is 10000. Meshery and the adapters communicate over gRPC, so we just need the service name and the port. Once you hit submit, Meshery will talk to meshery-istio and try to establish a connection with the Kubernetes cluster. If it succeeds, you'll be taken to this page, where it says that Meshery is configured.

The other thing, before I go on to the next step: we're facilitating a connection with Grafana, so that besides seeing client-side metrics, you can also pull pre-configured panels from Grafana into the Meshery UI and see all the results side by side. I have a Grafana instance, again running on my cluster, so I'm just going to give its URL. If you have an API key configured, you can give that as well, but mine is an unsecured instance, so I'll just give the base URL and hit submit. You'll see the connection to Grafana was successful, and once it succeeds, it pulls in some information: the boards available, the panels available, all the metadata from Grafana. What this is for is that you can pick a board, and if the board has any template variables, you can pick from those. By default, all the panels are selected, but you can deselect any of them; for example, I can deselect this one. Then, once you hit add, the page presents the board and the panels from that board that were selected, and right beneath, you'll see the panels right away. In the same way, you're not restricted to one board; you can add more. For the sake of simplicity, I'll choose another small one. So now you can see I have two boards, presented in expansion panels, and as you scroll, the data is loaded. I also have the time filters; we've pretty much tried to keep the Grafana experience here, and you can filter based on what you need. I'll just leave it at the last five minutes.

Now that we have Kubernetes configured and Grafana configured, I'll take you back to the play page. Since we're connected to Kubernetes, this tool lets you run commands on the Kubernetes cluster directly from the UI. The other thing we've done for Istio: on this page, all the operations listed are specific to a mesh, and they're served by the adapters. One of the interesting operations is running istio-vet, which is a tool for validating the configuration on the cluster. We have that enabled. So what I'm going to do is select that operation and hit submit. This runs the istio-vet command, connects to the cluster, gets the data from the cluster, and populates the notifications at the top right. You can see the operation succeeded, and you can see the count for the operations increment. Give me one second... yep, and then, when you click on it, you'll see all the responses from the istio-vet command coming up here.
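For context, istio-vet is Aspen Mesh's configuration validator (github.com/aspenmesh/istio-vet). Outside of Meshery, it can be run standalone as a container pointed at a kubeconfig; the sketch below assumes the image path, so check the project's README for the exact invocation.

```sh
# Run istio-vet directly against the cluster; the image path is an assumption.
docker run --rm \
  -v "$HOME/.kube/config":/root/.kube/config \
  aspenmesh/istio-vet
```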
You can see all the vetters ran successfully, and there was one error. If you click on any of these, you'll be taken to the details, and it will show what the error was, or what the details were. You can either close them, or dismiss them from here, which removes them from the list; you can also remove them from over here. Just ease of operation. In the same way, you can run the other commands that the Meshery adapters facilitate.

Apart from that, we also provide the capability to run custom YAMLs against your cluster. For the sake of this demo, I have a very simple YAML, and it's an Istio custom resource. But the Istio adapter is quite general; it uses a dynamic client, so you can use any Kubernetes construct that's valid on your cluster, including CRDs or anything else. So here's one example: I'm going to choose this and hit submit, and it's instantaneously applied on the cluster. If I reapply it, it updates it, which is a nice thing. If I want to delete it, I just flip the flag and hit submit. So this way, you can configure the mesh, as well as the Kubernetes cluster, from the UI.

Once you have the mesh configured the way you want it, you can come back to the performance page, where you can conduct the performance test. Now that Grafana is configured, all the charts that were configured on the previous screen are presented here too. For running the performance test, I have the canonical Bookinfo app running on my cluster; it's the canonical product page app for Istio. So I'm just going to use that URL here. I'm not going to change the defaults; I'll leave them as they are and hit submit. Since it's running for a duration of one minute, a countdown timer shows up for a minute. Once it's done, the results from the Fortio run will be populated in the graph right beneath, and while we're observing that chart, we can also observe the metrics from the cluster, because we have Grafana configured; that's essentially what it will be showing in the next 30 seconds.

In the meantime, to give some background information: we're continually trying to improve the user experience. The UI is built on React, and we're using Next.js as the framework. Like Lee mentioned, the backend servers are all written in Go; both the main Meshery service and the Istio adapter are written in Go, and, like I said, they communicate over gRPC. The vet command I showed earlier is actually streaming data to the UI using gRPC streaming and server-sent events.

So now that we have the results for the test, you can scroll down and also see the results from the cluster side. These results are from the client side, but the Grafana charts are feeding off of Prometheus, which is on the cluster, so you can compare the client-side results versus the server-side results right here in the same user interface.
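Under the hood, the test being run amounts to a Fortio invocation along these lines. The flags are real Fortio options (-qps for request rate, -c for parallel connections, -t for duration), but the values and the URL here are only stand-ins for the demo's defaults.

```sh
# Roughly the load-generation step: one minute of HTTP load against the
# Bookinfo product page; the qps and connection counts are illustrative.
fortio load -qps 100 -c 8 -t 1m http://my-cluster.example.com/productpage
```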
You can add more and more Grafana charts to make your search for answers that much more meaningful. So that's the overall user experience.

There's one other thing: any time a test is run, you're also given the capability to persist the results. Right now, they're persisted on AWS, so all your previous tests are saved. In my case, I've used this tool five times before, so these are the results from my previous test runs. From this interface, you can expand and see the charts for each of the individual runs. This is the run I did two minutes back; this was the one I ran an hour or so back; and so on and so forth. You can expand them and compare them this way; if you have a bigger screen, you should be able to compare them well. If not, there's another way to compare: you can click on one and use the compare-selected feature. If you compare multiple runs, the charts update accordingly, and you can clearly see the distinction; for three or more charts, the experience is slightly different. So that's another way to not just view the results of the most recent run, but compare against previous runs. Unfortunately, I only have five, but we've also implemented pagination and selection across pages, so if you've selected result one from the first page and result four from the second page, you'll still be able to compare them. That's pretty much where Meshery is today. Like I said, we're still working on it; it's been about two months since we started. So this is the current state of Meshery.

The view we're on right now, where we're able to compare performance results, is probably the one that most excited the students at UT Austin, in its ability to help facilitate their research. And now we're hoping it will also illuminate, or just answer, those questions for adopters, for people looking to understand the overhead of the mesh. There are a couple of other things, I think, needed to help facilitate this understanding. One is more out-of-the-box tests, such that when multiple people run them and those results are gathered, an anonymous report can be shared back that would tell people, basically like the speed test you might run against your internet connectivity, whether you've got a fast mesh or a slow mesh, relative to others. There are a lot of variables in there, but the more people use it, the more value it might provide. Part of our thinking there, as UT Austin goes to do testing in their HPC environment, is that the CNCF community infrastructure lab is maybe underutilized, and so maybe we would, ideally in combination with the other mesh projects, the vendors and their projects, go run some tests inside the CNCF lab.

As you guys go to ask some questions here (and Girish, I'm going to try to grab the ball from you, if you stop sharing), the last thing to tie off with is the facilitation of an apples-to-apples comparison, to the extent that that's possible. Brian and Ken, you're familiar with all of the variables here: what type of VMs are you running in your clusters? How big are your clusters?
Is there other activity, or is the test running in a vacuum? How many requests are you sending, for how long, and how many concurrently? All these variables. To the extent that they can be documented in something of a vendor-neutral benchmark spec (and there's the start of one), it's just a simple document that captures the environment details, meaning the environment in which you're going to perform the test, and the configuration details of your mesh, because some meshes come with lots of different ways to configure and run them, which affects performance, right? I was calling out distributed tracing and the overhead of sampling 100% versus 1% being dramatically different, so mesh configuration matters. And then the spec also captures the type of test you're going to run; I just gave an example, like sending 10,000 requests per second, or sending one, or how many concurrently, all these variables. When you're able to describe that in the spec, then shared part and parcel with the results would be the spec that describes the environment. So we've got our fingers crossed that that's helpful to people as well.

No, I think this is fascinating, really. This is really fascinating. I'll look at this a bit more and see what's going on, but it looks really fascinating. Thanks for that.

You're welcome. Yeah, do your worst. It'd be great to have you poke around, ask questions, make a mess. It'd be great.

My observation: I'd like to see it compared against a baseline with no mesh.

Yeah. Yeah, like, hey, here's your... and the tool right now allows you to do that; it just doesn't necessarily highlight that or facilitate it well.

Yeah, but in terms of your demo, you brought up a screen which had five runs of Istio, which probably ought to be about the same. I would expect that if you compared against the baseline, you'd see some real difference in the latency.

Yeah, great point, Brian. To have that both in the demo itself, as we're walking people through it, and, to my understanding, it's probably one of the more prominent questions people have: here's my performance off the mesh, and here it is on the mesh. And yes, there are a bunch of other variables, but simplistically, that high-level answer is very compelling. Taking a note.

Yeah, this is very interesting, and I'm excited about the work you're doing here.

Yeah, thanks for the great time; it's been good feedback. Actually, this is the first time outside of the community calls that the project's been presented. But there seems to be interest. We're trying to seize as many cycles as we can.

Yeah, definitely interest, for sure. But thanks for the time today, Lee and Girish; thanks for presenting the demo. And I'll be in touch with you guys shortly about what the Traffic SIG looks like and what things we want to propose to it. Nice. Yeah, very good.

Brian, if you've got any other critical feedback, the more critical, the better.

I mean, I think it's a good idea. I think a lot of your advanced ideas should work, but in the meantime, start with the simple stuff. And maybe do a bunch of benchmarking of some kind of traditional demo app. None come to mind, but I'm sure you can...

Is there a Sock Shop out there somewhere, Brian?

Yeah, that's true. That would be one, potentially. I mean, it's sort of intentionally built way too complicated. But if you can make a demo out of that, then that's great, because it's got everything.
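For illustration, the vendor-neutral benchmark spec described above, with the environment, mesh configuration, and load profile traveling alongside the results, might look something like the sketch below; every field name is an assumption, not the actual draft document.

```yaml
# Hedged sketch of a benchmark spec; all field names are hypothetical.
environment:
  node_count: 4
  machine_type: n1-standard-4    # what type of VMs, how big the cluster
  isolated: true                 # was the test run in a vacuum?
mesh:
  name: istio
  version: 1.1.1
  tracing_sample_rate: 0.01      # 1% vs 100% sampling changes overhead a lot
load:
  qps: 10000                     # requests per second
  duration: 60s
  concurrent_connections: 8
baseline:
  include_no_mesh_run: true      # compare against off-mesh, per Brian's point
```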
Thanks, everyone. Have a great rest of your day. Much appreciated, Ken. Thank you, guys.