Hello, hi there. How are you, Lee? Good. Hey, Neil. Good, good. Oh, just finishing up lunch, actually. I'm sure you're most interested in what that is. It's wontons, which have the effect of leaving me with some bad breath, which means any amount of affection with my wife is out of the question for the next day or two. So now we're on public record about my affections with my wife. Thankfully, she doesn't watch these. Bear with me one moment; there are a few folks messaging, looking to join. Oh, nice. Hey, there's Otto. Very good. Hey, did you make the big transition, the big role change? Is that a publicly talkable thing? Yeah, it happened. Good. Boy, I feel kind of awkward saying this, and maybe I've said it to you before, but I know a lot of Red Hatters, and there's something about their talent acquisition team where they hit close to the mark very frequently with quite intelligent people who are genuine and open. They have time for you; they want to engage and share. That obviously doesn't apply to you, but I'm just saying you've landed in a good spot. Oh, thanks. To be clear, just seeing your reaction to that is telling. Cool. Well, in the Zoom chat I posted a link to today's meeting minutes. We're three after, and now four after, so let's get going. I'll share the minutes and we will kick off today's SIG Network meeting. It's January 21st; I think this is the second meeting of the year, so welcome to 2021. Whoa, what are we, 2021? We don't have quite the trifecta, but we're close on the date: it's the 21st day of the 21st year of the 21st century, I guess. Hey, there it is. Yeah.
Okay, I knew there was something special. Very good. So we've got Anish here with us as well, Mr. Lima, Mr. Otto, and it's not "Vandercamp," is it? Correct me if so. Good. If you're on the call, everybody should have access to the notes, so please fill them in. As I was saying, this is the CNCF SIG Network call. We meet twice a month, every first and third Thursday. Our meetings are public; they're recorded and posted on YouTube. There are a few folks on the call for the first time today, so welcome. Since there's a smaller group of us today, or at least so far, we might do a little bit of introductions; that would probably be nice. For those who haven't been around for a long time, unlike Nikolai who has, the CNCF SIG Network is a home base for any networking and traffic-related project within the CNCF. So Linkerd and gRPC, NATS, and there's a long list: Service Mesh Interface, Network Service Mesh; I'm going to do a disservice to all the other ones we didn't mention. We'll have those topics first and then move into our working group topics, and the working group is where I expect we'll spend most of our time today; it's where we've got a few different work streams. Before we get into that working group, its charter, and what we're doing within it: the working group itself is a subgroup of CNCF SIG Network. Some of these particulars are neither here nor there, but I'm mentioning them for clarity, because we're going to talk a lot today about service mesh things, about Nighthawk and load generation things, but that's not all of what SIG Network focuses on.
So our topic for SIG Network hasn't changed since last we met, which is to acknowledge that the Ambassador project, based on Envoy proxy, has been submitted for donation to the CNCF. It's been submitted at an incubation level. There are sandbox, incubation, and graduated levels in the CNCF in terms of measuring the maturity of a given project, its adoption, et cetera. Speaking of which, Kuma is at a sandbox level, but I suspect Nikolai is probably hinting toward, or maybe thinking about, that next step soon. Yeah. Sorry, my camera is not working after some of the meetings today; I need to reboot. Yes, we are definitely looking. I'm one of the maintainers there on this service mesh; I'd like to refer to it as an Envoy control plane, let's say. So we're implementing the service mesh ideas there, but yes, we've been a sandbox project since June, or like late June or early July last year. And we are very much looking into gathering the needed, mostly, case studies, or how would you call it, success stories of the features, to actually be able to qualify for incubation. There are also a number of other things there, but this is the major part of it. Sorry. No, no, go ahead. So you gather case studies, user stories, in preparation for incubation; that makes a lot of sense. Yes. It's interesting: I joined like nine or ten months ago, and it was a pretty young project by then, and since then you can literally see people getting engaged. It's interesting how the profile of the people that come changes. First you get some kind of explorers, people that just go there to poke a little bit, send some feedback, and then disappear.
And now you get people that stick around, or come and start contributing directly. So it's an interesting experience across the full lifetime of open source projects. There's a blog post or something to write in there somewhere about the journey of an open source project and its community members: explorers, pioneers. That's funny. Nice, okay. Well, good. I don't have any further update on Ambassador; as much as I'm aware, it's just out there for review, and everyone's encouraged to comment, anyone on this call or not even on this call; you are most welcome to. There are a few folks here today who have been working on some of the service mesh working group initiatives, of which there are about three. I won't bore you with slides for long; this is just intended to be a recap and introduction for folks about what I was articulating before: the fact that there's CNCF SIG Network, its mission statement, and so on. I'm going to touch on the slides; we're not going to cover them in depth. There are some other co-chairs here. Matt Klein is our, well, at least was formerly or still is, our TOC liaison; I've got to go look that up. This is dated as of this last KubeCon, so we've got some projects on the horizon; Ambassador is coming in. As a sub working group of SIG Network there is the service mesh working group. Briefly, its initiatives include a collection of service mesh patterns: curating a list of patterns which are sometimes considered best practices, or at least described architecturally for people to follow. We haven't spent a lot of time on this call going through those, but the link to this deck is in our meeting minutes, and so too is the link to the full list of patterns being described and articulated.
So, as you can probably tell, for many of you, I've been trying to corral us into getting a lot of our conversations into this meeting channel, because there's a lot of work going on. I don't know that we'd characterize it as behind the scenes, but there's just been a lot of work going on in these various initiatives, and we're trying to organize it here. Another one of those is Service Mesh Interface conformance. Some of that is driven from the SMI meetings, but those are 30 minutes long every two weeks and there are a lot of service meshes to coordinate with, so there's work that goes on outside of them. Service Mesh Performance, that specification, we'll probably talk about a little later today. The individuals and the university working on MeshMark are not on the call today, so we won't talk about that. Instead, Nighthawk, and Get Nighthawk as a project, is where we want to drive into some particulars, so people don't need to listen to me speak the whole time. Otto on the call is probably a core Nighthawk maintainer, or the core Nighthawk maintainer. Otto, do you want to introduce Nighthawk to folks? Oh, you know what, you're on mute. All right, is this better? There he is, yep, very good. Apparently Zoom doesn't like my headphones. So, in any case, Nighthawk. Nighthawk is a layer seven performance characterization tool that basically comes with a couple of utilities. One of them is a client, a CLI, to synthesize load... sorry, what's going on there, is that Siri? Siri joining in. Oh boy. So there's a CLI, which allows you to control load generation. It comes with a gRPC service; you can use the CLI to control that, or program something yourself to steer it, and you can also use that to drive load generation.
And then there's a test server that comes with it; actually, everything is based on Envoy libraries. In short, I guess that's it. And then there's obviously the "why Nighthawk," because there are a bunch of load generators out there. One thing we've been trying to make Nighthawk shine at is being super sensitive: measuring latency very fine-grained. The target was 50 microseconds of precision. And then there's also multi-protocol support in there, so HTTP/1, HTTP/2, and, well, I can go on for a while about all the features it has, but I think the sensitivity is a key thing. So that's a very short introduction, I guess. Oh, thank you. So it's actually microseconds, not milliseconds. Yeah, we've managed to achieve that, but that also imposes, as it always goes when you're measuring latencies, that you control the noisiness of the systems and the environment in which you do it. But that's the same irrespective of which tool you use. And so that's part of the problem statement that Nighthawk is aimed at solving and has been solving. Nighthawk, from my vantage point, has been growing in popularity, and those that have been using other load generators are also turning their eye to Nighthawk. It's compelling enough that there are people looking at switching off of some of their load generators to Nighthawk. There are some other compelling aspects to Nighthawk that maybe, Otto, you could speak to as well: the adaptive load controller, the horizontal scaling of Nighthawk. Yeah, so there are quite a few features around these days. One of them is the adaptive load controller that was contributed by Google, well, not too long ago.
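The microsecond-precision point is worth making concrete. Nighthawk itself is C++ built on Envoy libraries; the sketch below is not its implementation, just a minimal Python illustration of the idea of capturing latencies at sub-millisecond granularity and reporting percentiles from the raw samples. The function names are my own for illustration.

```python
import time

def percentile(sorted_us, p):
    """Nearest-rank percentile over a sorted list of microsecond samples."""
    if not sorted_us:
        raise ValueError("no samples")
    k = max(0, int(round(p / 100.0 * len(sorted_us))) - 1)
    return sorted_us[k]

def measure(fn, iterations):
    """Time fn() repeatedly at nanosecond resolution; return sorted microseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        fn()
        samples.append((time.perf_counter_ns() - start) / 1000.0)  # ns -> us
    return sorted(samples)

if __name__ == "__main__":
    lat = measure(lambda: sum(range(1000)), 2000)
    print(f"p50={percentile(lat, 50):.1f}us "
          f"p90={percentile(lat, 90):.1f}us "
          f"p99={percentile(lat, 99):.1f}us")
```

As Otto notes above, at this granularity the numbers are only as trustworthy as the quietness of the machine they were captured on, whatever tool is doing the measuring.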
And with that adaptive load controller, you can answer research questions like: what QPS can I sustain, given that the P90 latency stays below a certain threshold? It will automatically try different RPS values, converge toward a certain frequency, and then attempt to sustain that. That's just one example, because the principle is fairly generic, so you can iterate on other things as well; it's all extensible and applicable, but this is the primary use case it was built for. And then there's something I've been working on myself, which is horizontal scalability. When you try to scale out a bunch of load generators, there are a couple of challenges that arise. I think two big ones are keeping the clients synchronized, so if you want to achieve a certain global request frequency, we try to make that easy and accurate, and then collecting all the results and presenting them, aggregating them in a way that makes sense. The third challenge is abstracting away from all that. I think ultimately it would be super cool if you could run a horizontally scaled remote execution basically by specifying one or two flags, and that would be the only difference between using the CLI to execute a local test and a horizontally scaled test. Basically that means that if you deploy the right services to a couple of nodes, you can easily orchestrate those and make them work together to send load somewhere, and you'll just get the results out as if you were running a local test. That makes deployment of the thing easy. So yeah, those two things are, I guess, the interesting developments. I'm trying to remember, did you call out something else for me to dive into a little deeper?
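The "what QPS can I sustain under a P90 threshold" question can be sketched as a simple search. This is not Nighthawk's actual adaptive controller algorithm, just a hedged illustration of the convergence idea: `measure_p90_ms` is a hypothetical stand-in for running a short load test at a given rate, and the sketch assumes latency grows monotonically with offered load.

```python
def max_sustainable_qps(measure_p90_ms, slo_ms, low=1, high=100_000):
    """Binary-search the highest integer QPS whose measured P90 stays under slo_ms.

    measure_p90_ms(qps) stands in for running a short load test at that
    rate and reporting the observed P90 latency in milliseconds.
    """
    best = 0
    while low <= high:
        mid = (low + high) // 2
        if measure_p90_ms(mid) <= slo_ms:
            best = mid          # SLO held; probe a higher rate
            low = mid + 1
        else:
            high = mid - 1      # SLO violated; back off
    return best

if __name__ == "__main__":
    # Synthetic service model: latency degrades sharply with load.
    model = lambda qps: 2.0 + (qps / 1000.0) ** 3
    print(max_sustainable_qps(model, slo_ms=100.0))
```

A real controller has to cope with noisy, non-monotonic measurements, which is why the production version converges on a rate rather than trusting a single probe per step.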
Those two are the ones that continually pop to my mind as really intriguing. Coming from spending a lot of time on the Meshery project, those enable a tool like Meshery, or our users, to answer a bunch of questions that the community has had sitting out there latent, right? Things that are hard to answer. So one thing that's also nice to mention is that while I'm describing the adaptive load control and the horizontal scaling separately, because of that abstraction, the adaptive load controller can also talk to that horizontally scaled system and basically barely be aware that it's driving something that's actually running remotely. Yeah, so to me, that type of capability opens up the ability to answer questions that my little brain has yet to ask. Describing Nighthawk as a layer seven performance characterization tool is exactly what you said, and that's exactly what I think of here: the high-fidelity ways, the new ways in which you can characterize the performance of your environments. There's one other load generator that comes to mind that isn't nearly as sophisticated but has some amount of horizontal distribution. I want to say it's like Octopus, but that's not the name of it; I can't remember it. I wish I could, because I'll go on record that that maintainer is not a friendly individual, not a welcoming and collaborative individual. Moreover, the project isn't as, well, anyway. So I'm super pleased about the discussions we've been having, some of the work that's been going on, Otto, with you and Jacob and Hutch, and just the feedback that's been gotten.
There are a number of other folks on the call today who broadly participate in the service mesh community, and in and around some of Layer5's initiatives, one being Meshery and Service Mesh Performance and some of these other things. They've been, well, my wife hates it when I say this, but they've been hot to trot on this initiative. They've been, I think, excited to answer some of those same questions. Part of the mission of a tool like Meshery is to help people adopt service meshes and do it a little easier, and to answer questions like: what should I expect? The exact question you just raised, which is: if we have this SLA, this SLO, how do we stay within it, given that we consistently have this QPS? What are those inflection points for us? When do we trigger? And that's just one of any number of other questions that could potentially be answered, or at least problem statements that could be characterized much more fully, to give people a bunch more info that they would need to run their systems better. Yeah, it could have been Locust; Adina, that's a good call. If the subtitle of that maintainer is "douche," then that's probably the one. Anyway, bad jokes. So there's a project coming forth here, Get Nighthawk, that has been pleasantly, warmly received. The notion is that Nighthawk's been growing in popularity, but there's only a single distribution artifact available for Nighthawk. Otto, is that an inaccurate statement? That's an accurate statement. Well, actually there are two, but they are similar: right now we push Docker images to Docker Hub, and that's it. Yeah.
So this initiative, Get Nighthawk, is to, well, uplift Nighthawk and get it into other people's hands: to spend some more time with Otto and the other maintainers of Nighthawk to help take advantage of Nighthawk's capabilities from a measurement perspective, and expose those to more folks in different ways. We actually covered this topic a bit last time we met, which is why I'm skipping through a little of it; let's dispense with the pleasantries and let the rubber meet the road on some things. There are a few folks who aren't able to make it today, by the way; three of them will be rewatching this, but they're raring to go. So let me introduce a couple. Pratya Banerjee also goes by Neil, which is easier for some of us. Neil, you're on the call. Do you want to say hi and talk about where you're looking to make a mark on the project? Yeah, so my name is Neil and I'm from India. As I said to Lee, I want to contribute to the Nighthawk project, and initially, for the first part, I want to get the site up and running as soon as possible. And yeah, that's it for now. Thank you, Neil. Neil has done this before, by the way; Neil is a maintainer of the Service Mesh Performance website as well. He's quite familiar with these types of websites, moreover ones that are right within the realm of what we're trying to do. Ultimately, part of what I'm hoping we'll accomplish with Get Nighthawk and these initiatives is a few different things. One is potentially some compatibility with Service Mesh Performance: as you go to inform Nighthawk of a load that you would like for it to generate, and the ways in which you would like for it to do that, Nighthawk has its own mechanism for doing so.
SMP, the Service Mesh Performance specification, is coming forth, hopefully as, well, "standard" is a strong word, but just coming forth as a specification for doing that consistently. So there are discussions to be had around that and where or how that might happen. Wow, I'm digressing; there are a lot of different things to connect here. What I was going to say is that I'm very pleased Neil is here; he's done this before for another relevant site. The link to Figma is at the bottom of the Get Nighthawk project, so hopefully everyone can access it, and hopefully you're able to comment on it too; I'll put a link into the chat. Please comment. The designer here, he's not on the call right now; his name's Augustine. A lot of different people are coming to bear on what on the surface looks like a small project. But my hope is that it's not; that it ends up popularizing Nighthawk's capabilities a bit, enabling people to do a few things they couldn't otherwise do. I'm quite pleased that Otto's been so warmly engaging, in part because there are suggestions being made about things like this logo here. This is just a draft of what could potentially come to be, not necessarily Nighthawk's logo. But to the extent that Nighthawk doesn't have another site, it's something, Otto, for you to think about and internalize: how closely you'd like this set of work to embody Nighthawk directly, versus sort of sit to the side of it. And so Neil, you had put together, and all of this is up for comment, that's why we're walking through it, Neil put together a project site and its purpose, some sections it would have, to try to indicate how much content would be on there and the scope of it. And so structure, some designs; an early domain has been registered.
I wouldn't click on the link right now, because it's just an early design, totally under construction. Very good. If I'm not wrong, there is another design of the Nighthawk logo in the Figma file; can you open it? Yeah, just above, this one. Yeah. So I did this one and I did the other one, so anyone can say anything they want about it without fear of hurting feelings. This one was sort of inspired by the fact that Nighthawk is a bird. It sort of looks like a bird, anyway. Personally, I liked that second one, so maybe we should raise a vote on some Slack channel on what the best option is. Surely; maybe it wouldn't hurt to have a third as well. I only chuckled to say, for my part, if I were in Otto's shoes, it's kind of a sensitive thing. I remember how the name Nighthawk came to be, and that was quite a bit of bikeshedding; in the end we just raised a vote and that's what it ended up being. Nice, okay, very good. And so, by the way, Adina is another individual on the call who's been engaged with these projects. She's been helping with continuous integration in the Meshery project, and thankfully she's also intrigued by Get Nighthawk, which has a lot to do with part of its initial challenges around continuous integration and producing distributions of Nighthawk. So maybe we switch to that topic, because that's a bit of where the rubber meets the road as well: guidance, Otto, for people. There are a few contributors looking to spend time getting fancy inside of GitHub Actions, inside the workflows there, and getting familiar with Envoy's, well, with Nighthawk's toolchain, Bazel and the whole thing. How do these folks get ramped, or where do they go to look for the current build process? What gotchas or caveats should they watch out for?
Yeah, so, to be honest, it's a little bit more involved still than I'd like it to be, because we piggyback on Envoy. Building through the Docker images is easy, of course, but preparing your own environment to do the same, well, then you've got to be pretty pedantic about the specific build needs of the project, and that may require a bit of tinkering. And once you've gone through that, you'll also find that it's not a very small build; it takes quite a bit of time, maybe even half the battery of my laptop. So it's a significant build. If it's possible, I would actually consume the Docker images that get pushed, but if it's necessary, then yes, building is possible. I'd start out with the README, and at some point that punts you towards Envoy's README for building, because basically the requirements are exactly the same. And from there, once that's set up, you should be set to go. And so, Adina and Anish are here as well, and actually Rangananth, thanks for coming, thanks for joining. Yeah, hi. Actually, I go by Sunku, more or less; nobody knows me as Rangananth. Oh, sorry. Good to be here. No, my name gets kind of messed up, but it's okay. See, the thing is, I sit high and mighty; nobody ever asks me how to pronounce my name. Mine's an easy one. Good, yeah, actually, thanks for sharing this. So right now I'm looking into the docs; I'll come back to you with certain questions about how it's composed and the components of how the testing works. Right now I've been doing analysis with Fortio, so Nighthawk is kind of the next one we'll be working on. Soon I'll have some feedback. Right, very good. Yeah, and from what I started and did not continue, it seemed that I need to build Envoy first if I want to build Nighthawk.
No, there's no need to build Envoy itself first, but you need to get the... they call the steps from the workflows from there. Yeah, the build prerequisites are the same. And another thing, let's call it the base image: the base image that should be used is Ubuntu 18, if it will be an Ubuntu distribution, or can we go with a higher one? Actually, I just answered my own question: no, we need specific GCC libraries, I think. So that was the thing: if I want to build for an Ubuntu distribution, what should be the base image, the version that I should take? Or for Debian, does it matter? I think this is kind of like... first, I don't know off the top of my head, so I should check what's the oldest Ubuntu version that the thing can be built on. And then I'd pick the oldest one, hoping that the resulting binaries will be compatible with all the newer releases. Okay, and other than that, the OS requirements, let's say the library dependencies, are the ones from Envoy? Yeah, that's right, for building Envoy. Thanks. Thank you. And also, you know, I don't have every requirement at the top of my head, because Envoy is a fast-moving target, in the sense that things change at a rather high pace. Yeah, that was the question. I was thinking I'd put a question wherever I'm having some blockers and so on, because I don't have my own laptop with me; it's broken now. Right, right. If you have any questions when you actually start cracking on this and run into anything, just feel free to ping me on Slack or some such. Yeah, so I will try it myself, but I think I will need some assistance to get the workflow done. Sure. And knowing about these issues is also valuable, because hopefully we can then update or add to the README somewhere to help people coming after us. Yeah.
Actually, what would help me have more confidence would be if at some point you have time to walk through the current workflow, like it is. I don't know exactly now what the bottleneck was, but there was a point where I did not get how Envoy gets built. I mean, let's say you run the build, and then you have the console output of the build and you see all the things that happened; I didn't see that. Actually, yeah, if you have a build of Nighthawk, it doesn't matter which version or which operating system and so on, but it's the CI workflow they have on Envoy; not necessarily from Nighthawk, from Envoy would be enough. It kind of sounds like I want the internet in a file, sorry. No, but I think I was missing something, and if I have an output of what is executing, then... So maybe it's good to iterate on that offline then. If you can reproduce the issue you ran into and copy-paste it to me, then maybe I can help. Yep. And offline, where or how should I contact you, so as not to be spam or something? Well, I'm in the Layer5 Slack, so maybe over there. My nick is... let me paste it. Yeah, that's my handle in the chat right there. Okay, thank you. But I don't think it's going to be today or tomorrow. Oh, that's fine, whenever you're up for it. Quick confirmation: the current Nighthawk workflow that's in CircleCI is used for that; this is the current build workflow? Yes, that's right. Okay. But that piggybacks on the build image from Envoy, right? So that makes things easier, because it has all the prerequisites already there. Nice, okay. So, good: we talked about the site structure, the site designs, draft logos, draft designs. Everyone here is welcome to assert opinions, and we'll do, per Otto's great suggestion, a poll and voting and things.
Rudolfo Martinez is another individual who will hopefully collaborate with Adina and make some waves around CI. He isn't able to join today; he's over at Rackspace, actually. Okay. So, speaking of Sunku using Fortio and trying out Nighthawk: the current state of Nighthawk support in Meshery, as these projects come together, is in part what initially was driving this effort. It's been convenient for a Golang-based project like Meshery to be able to use Fortio, as a Golang-based utility, basically as a library in that respect. Meshery provides a Golang wrapper, if you will, around wrk2, which, I don't know if that's C or C++, but it's not Go; it's one of the Cs. And so that was the original approach taken, or it is the original and current approach taken, to integrating Meshery with Nighthawk: to wrap some Golang around Nighthawk's command line interface. And that's part of the difficulty of getting it running. It's sometimes ideal that it might run separately in a different container, because, to what Otto described before, distributing, littering a cluster with multiple instances of Nighthawk is probably easily done by scheduling a container in a Kubernetes cluster. But it's also highly convenient for a tool like Meshery to have Nighthawk built in, within the same container, or available there. And so, I guess I'm trying to switch topics. I'm saying, hey, we covered these topics. Now, one of the other topics is the horizontal distribution and Nighthawk's support for that. Otto, earlier you described what that capability is, but is the current state of that capability in flight or available? It is in flight.
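The "wrap the CLI" integration approach mentioned here can be sketched simply. Meshery's actual adapter is Go; this Python sketch is just the shape of the idea, shelling out to a load-generator binary and parsing a machine-readable summary from stdout. The command name, flags, and JSON fields shown are hypothetical, not Nighthawk's confirmed interface.

```python
import json
import subprocess
import sys

def run_load_test(cmd):
    """Invoke a load-generator CLI and parse a JSON summary from its stdout.

    `cmd` is the full argv; a hypothetical example would be something like
    ["nighthawk_client", "--rps", "500", "--duration", "30", "http://target/"].
    Raises CalledProcessError if the tool exits non-zero.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

if __name__ == "__main__":
    # Stand-in command that just prints a JSON summary, so the sketch runs anywhere.
    fake = [sys.executable, "-c", 'print(\'{"p90_ms": 3.2, "rps": 500}\')']
    result = run_load_test(fake)
    print(result["p90_ms"])
```

The trade-off discussed above falls out of this design: a CLI wrapper needs the binary co-located in the same container, whereas driving Nighthawk's gRPC service would let the generator run in a separate, independently scheduled container.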
I think I have everything ready for it, except one challenge. Currently, it hasn't landed yet; this needs to go through review still. But the current state of my working branch is that it's able to collect all the outputs, and the outputs come in streaming, because we're also preparing for aggregating the raw, or high-fidelity, results, not just aggregated results. What I'm trying to say is that there is one slightly challenging part: if we're going to aggregate very large responses, then first all these large outputs need to be sent in chunks towards a central aggregation point within the cluster, the horizontally scaled load-generating cluster. And that's the part I'm still trialing. Now, having said that, what is working is actually quite a bit, and that is that you can just get the unaggregated outputs of all the instances involved. But that's just not super convenient yet, because for us humans that's a lot to digest, in the sense that if you have 200 nodes generating load, you'll get 200 result sets in, and then you need to go over these to make sense of them. Ideally, we do something with that. And the plan is, well, we're using HdrHistogram as one of the technologies for histograms under the hood, and that one is able to merge statistics. We'll be able to do that with the current state, and that's pretty easy. So, long story short, I actually think it's about time that I create a pull request for that, and then, on a separate track, finish up some stuff related to streaming raw statistics, so to speak. Sorry, does that make sense? I'm trying to compute the state of this on the fly. So you're having to do your own aggregation at that point. Yeah, but I actually think I'm able to make a pull request that, following the 80/20 rule, would be quite useful as is.
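The histogram-merging point is the key to why 200 result sets can be collapsed cheaply. Below is a simplified sketch of the idea, not the actual HdrHistogram wire format or API: each worker reports bucketed latency counts, the aggregator sums counts bucket-by-bucket, and global percentiles are read off the merged histogram. The bucket layout and numbers are made up for illustration.

```python
from collections import Counter

def merge_histograms(per_worker):
    """Merge per-worker latency histograms (bucket upper bound -> count)
    by summing counts; this is what makes distributed aggregation cheap
    compared to shipping raw samples to a central point."""
    merged = Counter()
    for hist in per_worker:
        merged.update(hist)
    return dict(merged)

def percentile_from_histogram(hist, p):
    """Nearest-bucket percentile: walk buckets in order until the
    cumulative count crosses p percent of the total."""
    total = sum(hist.values())
    rank = p / 100.0 * total
    cumulative = 0
    for bucket in sorted(hist):
        cumulative += hist[bucket]
        if cumulative >= rank:
            return bucket
    raise ValueError("empty histogram")

if __name__ == "__main__":
    # Two hypothetical workers reporting bucket counts (latency ms -> count).
    workers = [{1: 700, 5: 250, 50: 50}, {1: 650, 5: 300, 50: 50}]
    merged = merge_histograms(workers)  # {1: 1350, 5: 550, 50: 100}
    print(percentile_from_histogram(merged, 90))
```

Because merging is just summation, the aggregation point only needs each worker's bucket counts, which stay small regardless of how many requests were sent; that is the property the streaming-raw-results track has to preserve for high-fidelity data.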
And then we'll need some time to land that, and I think that might take weeks to months; it will go in tiny parts and it's quite a bit of code. So one of the... sorry, go ahead. No, go ahead. No, it just makes sense. So one of the models we're looking at for performance is north-south traffic, in the sense that our focus is on a telco workload. From that perspective, it's more about measuring, from outside the cluster or within the cluster, the performance of a node for the microservices that are running on it. In that sense, we send gigabits of traffic across the node to see how the microservices perform within the node, and beyond that we scale out across nodes. So that's another model we're looking into, to see how the north-south traffic model works and how these tools help us achieve that type of result. Right now, with Fortio, there are still things to investigate to see why it behaves the way it does when we scale beyond a certain QPS, like 10,000 QPS or whatnot. So I have to figure out how Nighthawk does, and how it scales. Is there any comment on what tool would be good, or whether Nighthawk is suitable for this type of test? I'm sorry, I'm still trying to digest. Here, let me toss in a thought. Unless, Otto, you were about to give one? No, go ahead. So one of the things you said, Sunku, hits home from the perspective of tooling to support it.
It sounded like you wanted to make sure that as you characterize the performance of large volumes of requests, like in a telco-sized environment, you do so in consideration of the impact of generating the load itself. When it's done from within the cluster, you don't have a clean scientific vacuum; you're dirtying the lab. Which I think is actually why the horizontal distribution capability, horizontal scalability, is interesting to me, because to me all the test cases are valid. Are you dirtying your environment if you're generating load and burning some CPU from within the cluster? Yes. Is that valid? Well, I think so. Do you have microservices deployed and spread across your cluster? Yeah. Are they talking to each other? Yeah. Is one generating load against the next? Yeah, okay. But in other test situations it's like, look, what we want to do is pretend we're a user: we want to generate user traffic and control its direction, with everything generated externally, maybe from multiple sources externally, maybe against multiple endpoints at the same time, which is another exciting capability of Nighthawk. Hence, that's been a focus of the Meshery project: for Meshery to easily deploy outside of a cluster and generate load using Nighthawk or the others, or to do it internally, and to give people easy-to-use tooling with which they can repeat those results. Sunku, are both of those valid for you? Am I putting words in your mouth when I say you would want to do both, and that certain test cases are appropriate for one versus the other? Yeah, I mean, you're right. Both are valid, surely.
I think from a telco deployment standpoint, generally each node might not have tons of microservices where you want east-west traffic across tons of microservices within a single node. Most likely you have a couple of important CNFs, container network functions, deployed in probably a microservice fashion. Sorry, that's why the traffic going in and out of a node is crucial; characterizing that is crucial. At the same time, of course, they are deployed in a cluster fashion, so you surely need to understand the performance across these microservices scaled across a few server nodes. So yeah, both models are surely important. And the key part there is what network characteristics, network parameters, these tools consider: scaling TCP sockets, or how layer 3 and layer 4 tuning is done in these tools, in order for sidecars to process the traffic and deliver the HTTP packets to the actual application. That's something to consider in leveraging these tools. Part of my effort is to understand that: see how these tools perform, whether they satisfy telco needs, or what could be tweaked a little to satisfy them. I'm not sure if I make sense, but yeah. Yeah, to me it's very clear why you're on this call. I'm glad that you're on the call; there are not that many people I've been able to connect with who are trying to study that. And it becomes more meaningful to study: the higher the volumes you have, the more impact performance tuning has. So actually, with respect to internal versus external load generation, I would be the bigger fan, and I totally agree, of putting the load generators outside of the test subject.
Outside of the test subject, so to speak, makes a lot of sense. But the thing is, so far most of the open source systems I've seen that do this type of testing all generate load internally in the cluster. I think that's because it's more convenient: the tooling that's being used doesn't come with the features to easily do it another way. From a scientific point of view, though, when you consider the environment, that's a pretty important aspect, right? Ideally you have lightly loaded clients and a totally noise-free environment for the test subject. So that's my two cents, but I think it imposes some requirements on the tooling you're using. And hopefully Nighthawk does make that a bit easier by being able to drive a separate cluster that you set up for load generation, which then sends the test workload towards another cluster that you're actually interested in measuring. And maybe, if you need origins, the cluster under test could have an egress to origins residing in yet another cluster. That seems like a cleaner approach, and also easier to set up. Yeah. Yeah. So, go ahead. No, no, go ahead. No, just to say, in terms of load generation: traditionally, at least in telco models, we have RFCs provided by the IETF. For example, at layer 3, RFC 2544 is a popular one, and it determines how many frames, what rate, when to back off, when to open sockets, and it's all layer 3: what kind of packets, how to measure these packets, and how packet drops are measured. All of this goes per standard. The tools we're looking at are for layer 7, but that's something we need to standardize, in my opinion, along with what your TCP scaling algorithm is and how you're scaling your HTTP traffic.
And what kind of response codes to return, and when, right? So that's a difference I see between these tools: when you're measuring performance, latencies especially, the latency numbers differ depending on the tool you use, even though you're configuring the same environment. So that's not a standardized representation of what your latency is, unless your whole company uses the same tool for everything, right? I think that's the gap I need to address. I'm running some benchmarks internally, so I hope to make some progress there. You have a captive audience here. If you don't mind, I've got a request of you, and as we go to wrap up today's call, maybe we can recap some action items. Sure. So I'll throw one your way: I'd be really curious for your thoughts and feedback on the concept of MeshMark, which is articulated in a slide here, so I'll post that. It's also described a bit further on the SMP spec site, smp-spec.io. Otto has offered some thoughts on the subject in the past as well. To articulate this really concisely: we're looking to pick up this thread and this piece of work and engage academia to do so. We have a couple of universities with supporting professors who will hopefully help create an algorithm, or define how this should be measured and how it would work. So I'd be curious for your feedback next time we meet, or before. I've also asked for a mailing list for the service mesh working group, separate from the SIG Network mailing list, so that as we potentially use it to drive some of our collaboration, we're not spamming the CNI folks with Nighthawk stuff or whatever. Hopefully that's coming forth; that's an action item for me.
Neil, I know you're probably moving fairly briskly through iterating on the site designs; I've seen some commits coming through from you. Adina, it sounds like you're going to go off and read some READMEs. We'll probably bring Rudolfo up to speed as well and make an attempt at some builds. And Anish has been here absorbing. I don't know if he's still on... yep, he is. Yeah, I'm here. So you're very much in danger of being put to work. Just be ready. Yeah, I'm just waiting for it. Cool, good. I'll catch up with you just after the call, so as not to keep everybody. And Nikolai, I dare not try to task you. If that were to happen, I would talk to you about SMI conformance. Yeah. No comment, don't say anything. Fair enough. Nikolai, or anyone else, did we miss anything? Or is that a wrap for today? Yeah, I'm fine. Actually, just one last question from me. Sunku, can you characterize some of your current focus? Any particular goals you're chasing, other than the ones you generally described, or particular questions you're looking to answer? Yeah, I recently started this work, so I'm still at an early stage. These are some of the gaps I'm noticing with respect to what we want to help telcos with. But soon I'll have more data and more information about which tools I used, how, and what the tools could look like, things like that. So in the coming weeks I'll probably have more feedback as we go. Sounds good. Folks? Yeah, I'll take a look at Meshery, surely, and probably we can have a chat offline or something. That'd be great. That'd be nice. Thanks. Much appreciated, all. We'll have this topic again a couple of weeks from now, but I anticipate some Slacking in the meantime. So thank you all. See you in a couple of weeks. Talk to you later. All right. Thank you. All right. Bye. Bye.