...off to a slow start, as everybody's trying to watch Perseverance land on Mars. Hello, hello. Hey, Leigh, are you in Austin? I saw somebody flash up earlier who looked like an Austin person.

Yeah. You got power and water? How are you holding up?

Boy, I tell you what, it's never been such a luxury to have a cup of coffee. I'm counting my lucky stars. I've got a portable desk at this point; there's no permanent home for me, it sort of depends on where I can get power. I tell you what, I'm just full of complaints. I've got endless things to complain about.

Let's hear them. Come on, you'll feel better if you share.

This is not news to anyone, but just in case anyone needs a reminder: the roads are blocked. We're out in the country just a little bit, outside Austin, and there's a big hill leading up to the city, so to speak, so lots of accidents. We're on our own well water, and naturally that's frozen, so there's no running water. That's no different from some of the other folks around here. The electricity is in and out, so naturally there's no heating either. Long story short, after a few days of that you come down to melting snow. And since we've got a couple of dogs, a reminder: yellow snow bad, white snow good.

Oh, man. Yeah, that's tough. I'm here in Portland. It's been about three days of fully icy roads and snow; about 200,000 folks lost power, no internet, and water has been scarce.
Luckily, the area I'm in didn't have any of these issues, but I could see it's very tough. If the city is not prepared for that for a prolonged period, it's really difficult.

Looking at Jim's background, I'm not sure what to think of it at this point; it's sort of a slap in the face.

That's because I love the snow. I live two hours south of that picture, but I'm a snow fan. I grew up in it, and I like driving in it, skiing in it, playing in it; I even like shoveling it, oddly.

There's some healthy exercise in there. But to Sunku's point, if the area you're in wasn't made for the cold, even just a little bit of it gets you. No snow plows coming to the rescue, right?

Oh yeah, they turned off. Well, this is great. This is the type of meeting I can get into: people have their webcams on, people are making corny jokes, even laughing at the ones that I tell. This is nice. I think we've struggled to get a cadence going, but I'm seeing folks for the fourth time in a row, and we're really on the brink, the cusp of some things. Ken Owens is on the call as well; Ken and I are overdue to catch up. Ken co-chairs the CNCF SIG Network, and he and I had co-chaired the CNCF networking working group as well. There are an endless number of problems to solve in and around networking; it's a massive, complex area. And yet, maybe pointing the finger back at us, I feel like we've had somewhat inconsistent topics.
So I've been excited about the service mesh working group and some of the initiatives within it, because it gives us a cadence, some regularity to the things we're doing; we get a progression. The working group ends up being a little different from a CNCF SIG Network meeting, which has in the past done, and continues to do, quite important work: reviewing projects proposed for adoption, and those that have been here and are coming up for their annual review. We get to meet new folks, look at new projects, and opine on those, but it feels somewhat administrative, a bit like pedaling in place. That's not to do any disservice to the work those projects do and how important it is for them to be adopted and go through that review; it's critical. Anyway, working on some long-lived initiatives, to me, is nice. So that's what we're going to do. We're going to talk about some of the same stuff we've talked about before, and lead off today with a bit about SIG Network.

Hey, one trivial item to open with. I think you weren't on the TOC call the other day; you sent a note saying you had no power or something. It looks like they're renaming SIGs to TAGs. Did you see that note?

I did see it's up for a vote. More specifically, if I recollect, a Technical Advisory Group.

I'll say, as someone who's trying to learn and navigate the world of CNCF: I thought I understood SIGs until someone said, oh, by the way, there are Kubernetes SIGs too.
That threw me for a whole loop: when the Kubernetes folks say SIGs, they're talking about something different than the CNCF ones. So I think the differentiation will be helpful for me. It'll mean a change for folks who have been involved in the CNCF SIGs for a while, but I think that's fine.

Yeah, I would cast my vote to change. I had been vocal on that particular point when we first named the SIGs, in part because it is perpetually the case that I go to speak to someone about CNCF SIG Network and they think I'm talking about Kubernetes SIG Network.

Fair enough. So, one quick note, and now's a good time for a call for topics, by the way, if people have others. There is an outstanding discussion on Ambassador and whether or not its name should change. I didn't dig into this last time we met, but to make a quick remark: it's interesting to me that the CNCF and its processes, as you would imagine for any organization, evolve over time. There have been other examples of hard-to-differentiate corporate entities and projects, whether that was the name or just the way the projects are governed. I think there's just continued maturity in terms of calling out Ambassador and its name, and I hope it doesn't mean that projects are being treated unfairly relative to projects before them and what they were held to. Anyway, just a side comment.

The crux, the core of what I wanted us to dive into today: first, recall that the last time we met, we ended up talking mostly about service mesh patterns.
We got an introduction to the Open Application Model, OAM, and how you can take a service mesh pattern, the ones we've been looking at, and articulate it in an OAM-compatible YAML file. Just recapping from last week: you could take that and hand it to a service mesh manager, Meshery specifically, and have it implement that pattern. In that case the demo was a canary release, and we saw how that service mesh pattern and OAM interface with the Service Mesh Performance specification and the Service Mesh Interface specification.

With that context, today we've got some discussions on GetNighthawk. We've introduced the project in the past and walked through its goals, and I think everyone on the call has had a bit of an introduction to it, but please interrupt if you've got questions as we dig in. It's always my goal not to talk on meetings like these, and I think we're going to be successful in that today; there are a couple of folks here who've been working on GetNighthawk. Brief recap: GetNighthawk is an effort to formalize a few distributions of Nighthawk as a performance characterization tool. Nighthawk is a load generator born of the Envoy project, and part of the goal of GetNighthawk is to help get it into people's hands. There are a couple of other goals: one is to make Nighthawk compatible with the Service Mesh Performance specification, and through its integration with Meshery, Nighthawk will be SMP-compatible.

A question, maybe out of ignorance: when you say load generator, is that the same as a packet generator, or something different, based on workloads?

Same.

Okay. I can tell you, I've already seen in my little world five different similar packet generation open source projects: DPDK's pktgen, there's MoonGen, and the fd.io folks have one.
I can't remember the name of theirs. It seems like everybody reinvents the wheel on a regular basis. Not sure whether that's out of a need for origination, a lack of understanding of the options, or specific requirements, but whatever.

I guess the major difference, Jim, is that Nighthawk looks at HTTP traffic, based on the HTTP/1.1 standard or HTTP/2. It doesn't necessarily dig into packets per second or the link-layer stats that most of the other traffic generators look at. That's one thing we've been discussing with Otto, the maintainer of Nighthawk. There's a little bit of a disconnect between L2/L3, where there are standards like RFC 2544, versus what L7 tools can generate. We have a parallel discussion with Otto there to see how we can bridge the gap between the two, or make sure they're both in sync, essentially.

Thank you. Some load generators run in a closed-loop mode; others are capable of running in an open-loop mode, which is to say the open-loop ones are, I don't know how to paraphrase it, discourteous in how they just blast.

In a UDP sort of fashion?

They're not polite about it; they don't wait for the response to come back. Some of the other differences, or reasons I can imagine needing different load generators, come down to what language they're written in and what SDKs are available to programmatically interface with them. There's actually an analysis written about two load generators that are within the service mesh wheelhouse, Fortio and wrk, versus Nighthawk, and a little bit about why, I believe, if Otto were here, Nighthawk was written rather than Fortio being reused: differences in language choice, differences in how they perform.
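The closed-loop versus open-loop distinction above can be made concrete with a small simulation. This is an illustrative sketch, not how Nighthawk itself is implemented: the closed-loop generator waits for each response before sending the next request, so its throughput is capped by server latency, while the open-loop generator sends on a fixed schedule regardless of whether earlier responses have come back.

```python
def closed_loop(duration_ms, server_latency_ms):
    """Closed loop: the next request is not sent until the previous
    response returns, so offered load adapts to server speed."""
    sent, now = 0, 0
    while now < duration_ms:
        sent += 1                  # issue the request
        now += server_latency_ms   # block until the response arrives
    return sent

def open_loop(duration_ms, rate_rps):
    """Open loop: requests go out on a fixed schedule regardless of
    whether earlier responses have come back ("discourteous")."""
    sent, now = 0, 0
    interval_ms = 1000 // rate_rps
    while now < duration_ms:
        sent += 1                  # issue the request, don't wait
        now += interval_ms
    return sent

# A server taking 100 ms per response throttles the closed-loop
# generator to ~10 rps, while the open-loop generator keeps sending
# at its configured 100 rps no matter how slow the server is.
print(closed_loop(duration_ms=1000, server_latency_ms=100))  # 10
print(open_loop(duration_ms=1000, rate_rps=100))             # 100
```

This is also why open-loop generators are better at exposing queueing delay under overload: they keep offering load even when the server falls behind, instead of silently slowing down with it.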
So, Sunku, do you have anything more to add on that? In your mind, where did those discussions leave off in terms of better lower-layer support?

Yeah. Otto had written down, in a Google Doc, some pointers as to what we could work on. We haven't made a lot of progress there yet; I've been chasing a customer issue and haven't really gotten a chance to work on it. But he did forward another Google Doc that I think most folks from the Envoy community are looking at, on what a good set of features required for load generation looks like from a Nighthawk perspective. I can forward that Google Doc here. It goes through more detail on what's required in Nighthawk, the features to be added, the gaps, and things like that, and I see folks from many different companies participating there. That's where we are; hopefully we'll make some progress next week.

Nice. If you do have a link to that Google Doc, we'll take it right now. Okay, Vinayak and Utkarsh are here. I don't know that we've got a representative for the GetNighthawk website, but I'll pitch it. Abhishek, Adina, do you want to speak to how distributions of Nighthawk are being created?

Sure. We put up this doc when we initiated the project to track the plan of action and all the updates, so that everyone is aware of them. The first step we followed was to build the project's components individually: Nighthawk as a project has several components, like the client, the gRPC server, and the test server. We were able to build them individually and publish them as artifacts, manually for now.
Having accomplished that, we put up a list of the actions we're taking next on automating this process, and this is exactly the plan we have in mind. The main idea is to play around with a couple of GitHub Actions. We'd probably start with a custom action, then go about building the CLI for different CPU architectures and operating systems. We're currently aiming at two kinds of builds: stable and nightly. That's the overview of the current progress of the project, and hopefully the items you see here will actually be done by next time. That's the major update around the project.

I'm curious if anyone has a perspective on which of the operating systems and which of the package managers might be higher in priority than the others?

Currently we're targeting two operating systems, Linux and macOS, and obviously there's a Docker image published that's separate from these two native builds. For Linux distributions we're targeting YUM and APT repositories; they have the highest priority. In parallel we have Homebrew for macOS, which is the most popular, and Scoop as an option, but the target is that Homebrew would be P1, priority one, along with a Docker image build, which would be Linux-based. Those are the ones with the topmost priority.

Gotcha. Internally, the primary priorities would be Ubuntu and CentOS; most of our tests are based on those, so YUM and APT are the things we care about too. I think CentOS is changing; Red Hat is changing the way CentOS is distributed going ahead. Instead of a full-blown release, it's doing, I think, a rolling update; I'd have to check.
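As a rough illustration of what the per-OS, per-architecture build fan-out above looks like, here is a sketch of enumerating a release matrix, the kind of thing a GitHub Actions workflow matrix would expand over. The OS/arch values, channel names, and artifact naming scheme are hypothetical, not GetNighthawk's actual layout.

```python
from itertools import product

OSES = ["linux", "darwin"]          # Linux first, then macOS
ARCHES = ["amd64", "arm64"]
CHANNELS = ["stable", "nightly"]    # the two build flavors discussed

def artifact_name(channel, os_, arch, version="v0.1.0"):
    # Stable builds carry a version tag; nightlies are just "nightly".
    tag = version if channel == "stable" else "nightly"
    return f"nighthawk-{tag}-{os_}-{arch}.tar.gz"

matrix = [artifact_name(c, o, a) for c, o, a in product(CHANNELS, OSES, ARCHES)]
for name in matrix:
    print(name)
# 2 channels x 2 OSes x 2 arches = 8 artifacts per release
```

The same list then feeds the packaging targets: the Linux artifacts get wrapped for YUM/APT repositories, the darwin ones become a Homebrew formula, and one Linux build backs the Docker image.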
So the release model is changing, but maybe that's something to worry about in the future; for now it's good to go.

Okay. One thing Otto was mentioning was performance CI using Nighthawk. If I understand this effort, is GetNighthawk more about enabling Nighthawk as a commodity, packaged tool that one could pull in using YUM or APT, or are you also looking to establish a CI where you could continuously run the performance tests, on Envoy for example?

Both. Thanks for asking. It's a little bit confusing, I think, in that there are a couple of highly related efforts going on, and when you step back and look at them all, it makes a lot of sense. In the context of Service Mesh Performance, the specification, part of the goal of that initiative is to help the world understand the different performance characteristics of service meshes. Part of why Nighthawk and load generators are of interest is to be able to answer people's questions about what overhead to expect in certain environments, or the very specific performance overhead they might get from invoking a particular function. So part of the standard is to help bring forth common tooling, and that leans into something like GetNighthawk: Nighthawk is very useful and somewhat extensible, but kind of hard to build, and not necessarily well promoted; it doesn't have its own website. Those are part of the goals of GetNighthawk, and both of those projects, SMP and GetNighthawk, reinforce each other. To Jim's earlier point about there being a litany of different load generators available, one of the challenges I've always found is that when you do want to go run a series of tests, it often ends up being some bespoke test
harness, test framework, and test environment in the lab, and you basically end up devoting one or two people or more just to this, to performance engineering. And universally, as service meshes gain in popularity, every time I talk to people they have this question about what the overhead is, how to manage it, and how to know if they're running things well. If you talk to any of the individual projects, they'll disagree, or say this one's better, that one does this; it's a challenge for people to understand. So: bring forth a standard specification, and enable people with easy-to-access tooling. But more than just getting Nighthawk into their hands, which is great, people will still need to build what I would consider bespoke tooling around GetNighthawk: scheduling when it's going to run, running the same tests consistently, tracking the feedback in the reports, comparing the performance over time. That's in part how Meshery comes to bear, facilitating those things: deploying your service mesh, deploying your applications, scheduling and running the load tests, either interactively, where people can go examine reports, or programmatically. Yes, I think, is the short answer to your question, Sunku; it's yes to both. Part of the goal of GetNighthawk, through its integration with Meshery, is to allow people to put this into their pipelines. And by people I mean all of the users, but also the service mesh projects themselves, as they integrate GetNighthawk and Meshery into their pipelines and use a standard spec, a common method, SMP, Service Mesh Performance, for describing what they're testing, with standardized reporting back of the performance of their environment.
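The "comparing the performance over time" piece of such a pipeline can be sketched as a simple baseline check: fail the build when a latency percentile regresses beyond a tolerance versus a stored baseline. This is a hypothetical example, not part of the SMP spec; the metric names and the 10% tolerance are made up for illustration.

```python
def check_regression(baseline, current, tolerance=0.10):
    """Return (metric, baseline, current) tuples for every metric that
    increased by more than `tolerance` (a fractional change)."""
    regressions = []
    for metric, base_value in baseline.items():
        cur_value = current.get(metric)
        if cur_value is not None and cur_value > base_value * (1 + tolerance):
            regressions.append((metric, base_value, cur_value))
    return regressions

# Illustrative latency percentiles from a previous and a current run.
baseline = {"p50_ms": 2.1, "p90_ms": 4.0, "p99_ms": 9.5}
current  = {"p50_ms": 2.2, "p90_ms": 4.1, "p99_ms": 12.3}

bad = check_regression(baseline, current)
print(bad)  # only p99 regressed (~29%, beyond the 10% tolerance)
```

In a CI job, a non-empty result would fail the pipeline; the value of a standard report format is that this comparison works the same way across meshes and across releases.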
Release to release, when you step back and look at all those initiatives together, it makes sense.

Definitely, and thanks for that big-picture view. One other thing Otto was mentioning was looking to build some performance CI for Envoy using Nighthawk, so I'm curious whether both go hand in hand; it looks like they line up. Also, one thing I was mentioning to Otto: there's a CNCF testbed available, I think, to run some of this testing. I'm not sure if you're leveraging it, or planning in the future to run these performance tests on the testbed.

I chuckled, because yes, again, is the short answer. When you look at what this working group is trying to accomplish, there are a few initiatives it breaks down into, and one of those is very much that. We've had access to the cluster for a long time to do these particular tests. Part of our challenge has been in actually going and running the tests and publishing some results, and that's one of the things that all of you here, and others who aren't, can help with; for my part, I'm very motivated to go do that. We haven't done it in the past because, a, we don't want to get it wrong and make asses of ourselves and of all the other meshes, and, b, without easy-to-access tooling it means writing a bunch of bespoke tooling to run these. That is, without having Meshery there to be able to go over to an empty set of, say, 20 servers or however many in the lab, and take a pattern file that says: I'd like to take this mesh, or this mesh, or this mesh, under this config, or this config, and I'm using this app, or this app,
or this app, and I'd like to run this action, this service mesh action. Last time we met, the action was a canary release. You're able to describe that in a YAML file and pass it to mesheryctl, so you can schedule it or invoke it programmatically, and have Meshery deploy the mesh (assuming Kubernetes is present on those servers), deploy the app, invoke the action and the configuration of that mesh, and in turn have GetNighthawk spin up at least one, or maybe multiple, instances of Nighthawk, hit the endpoints, gather the load results back from Nighthawk, and present them. All of that was too much for us before. It's been about a year now that I've been saying pretty much what you just said: hey, wait a second, there's a great resource sitting here, there's a common question a lot of people have, and aren't we a working group positioned to help provide that information to people? I think we're basically right at the point at which going off and doing that effort isn't a bunch of throwaway work; there's some tooling you can reuse to make those tests repeatable and shareable, in a fair way, like Service Mesh Performance. Thank you, Jim. You enjoy that snow.

And even to do it fairly, right? Because we've talked to every single service mesh that's out there, and one that is yet to be announced; I don't know when they're going to announce. To be able to say, hey, there's a specification here, and you're all welcome to comment on it and make sure it's fair.

Yeah, absolutely. That's a really good start, and a good question. Thank you. And I don't mean to prey on you in this meeting, but Sunku, you're very much needed in that. Speaking for my part, I'm tapped out.
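The deploy-and-test flow just described can be sketched as pseudocode-style Python. Every function and name here is a hypothetical stand-in for illustration; the real flow goes through mesheryctl and a pattern file, not these calls.

```python
def run_benchmark(mesh, config, app, action="canary-release"):
    """Illustrative stub of the flow: deploy mesh, deploy app, apply the
    pattern, drive load with Nighthawk, and collect the report."""
    steps = []
    steps.append(f"deploy mesh {mesh} with config {config}")
    steps.append(f"deploy app {app}")
    steps.append(f"apply pattern: {action}")
    steps.append("start nighthawk instance(s) against app endpoints")
    steps.append("collect load report")
    return steps

# Sweep meshes and configs against one sample app, as in the
# "this mesh or this mesh, this config or this config" description.
for mesh in ["istio", "linkerd"]:
    for config in ["default", "mtls-on"]:
        for line in run_benchmark(mesh, config, app="bookinfo"):
            print(line)
```

The point of having a pattern file and a scheduler is exactly that this loop becomes data, not bespoke scripting: the same sweep can be re-run on the lab cluster, release after release, with comparable output.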
But I so much want for us to do that.

If there is an SMP specification in the works (I went through the site and saw what could help), please do include the link in the minutes when possible, so we can begin.

Absolutely. As a matter of fact, it might be good to tee that up for the next time we meet: let's do a spec review, a walkthrough. We'll get you that info in the meantime, but let's do the review, because it's missing some things; it's good, but there's more.

Sure. Okay. So, Abhishek, thank you for the update. We've had progress: Nighthawk is being built locally, and you've got a collection of folks here, Adina and Rodolfo, who are going to go off and spend some time on GitHub Actions workflows. There were also a few folks I thought would be able to join today, but couldn't make it, who have been working on the website for GetNighthawk. In part, this includes a question I think I planted in Otto's mind last time we met: given that Nighthawk doesn't have a project site, does GetNighthawk become the project site? Irrespective, there's a collection of folks working under this domain to bring forth some designs. There's a link to the designs here; hopefully everybody that wants to get to it can. I don't think I'm going to be successful loading it at the moment, but it's a simple static site; this is the scope and structure of it. There's not a lot to show; if you visit the URL it's going to be pretty barren. There are some open source contributors spending what time they can and doing a good job. One other item: there have been, I think, about three logos drafted
as possible logos for GetNighthawk, and those three, and maybe more, need to be presented to you all for a vote. We need to open up a poll and have people vote, or suggest that a redesign be done. What else did we have? If you're looking for open items...

I missed asking this at the beginning: with respect to some of the work you mentioned in the last call, working with some universities on service mesh performance or testing aspects, could you provide some more details as to what's going on there?

Great question. How do I try to be concise here? It's again part of the bigger picture. There are some specific items we're engaged with a couple of universities on, but really, Sunku, you specifically being here (and it could be Ken, or Derek) matters, because it has to be more people than me. We need a bit of a fresh kickstart with some of them. There are two professors, one at NITK; the last part of the name I'm going to mispronounce. Abhishek, do you recollect?

Kharagpur, probably? I want to say that, but it's not Kharagpur; it's Surathkal. Actually, I don't have the paper in front of me. There were two universities, one from Texas and one from India, right? From India it's NITK Surathkal, if I'm not wrong.

Yep, NITK. And this particular professor here, Mohit Tahiliani, at NITK. For some reason his page isn't coming up. I'd be happy to reach out, or if you want to coordinate who should reach out to which university, we can definitely get that going. Thank you, Ken. There's another one as well. Sorry, guys, I feel like I'm so disorganized; there's just so much. So there's this professor, a super nice
individual; his areas of interest are right within the same networking space, and you can see some of the prior research he's done. There are a student or two of his we've been engaged with, who've been studying and learning Nighthawk; part of it is that we're pretty interested in its adaptive capabilities. Anyway, there are a few different things I think we can go off and accomplish, and by we I mean Ken and I and all the folks on this call. They've been written down; some of those notes are a year old, and it shows in how they're written, which is why, when Sunku says, hey, do you have a link to that, it's like, yes, there's a link, but it's dated and needs to be updated. Irrespective, we'll send you a couple of links about the projects we've discussed with them and introduce you to these folks. There's also a professor from, not Cornell, I think it's NYU; I owe him a response. He said he wants to engage on SMP, and in my mind, again, SMP, Meshery, Nighthawk, GetNighthawk, the patterns, the CNCF lab: all very beautifully intertwined, or overlapping. So even though he said he wanted to engage on SMP, I think he's open to more. I don't remember his name, honestly. One of his students grabbed some WebAssembly filters that people in the Layer5 community had written in Rust and rewrote them in C++, so they're an enhanced part of our interaction.

Now that I've totally digressed: I think there are two things for us to cover that will help us get organized and help empower Ken, Sunku, Abhishek, and Adina to move some of this forward. One is that we now have a service mesh working group mailing list. If you did not receive an invitation, or if you did but weren't sure what it was about, I encourage you to subscribe, because this is a good way for us to sync up in the two weeks between the times
we meet. There's a link in the meeting notes. The second thing we should make sure we discuss: Ken was noting that we've consistently done a SIG Network introduction and deep dive at KubeCons, and for the upcoming KubeCon we need to go ahead and reserve our spot, I think today. Ken, if you don't mind: there's a generic description of what that talk is, and the one from the last KubeCon, or the one before that, is generic enough that it'll totally work, so Ken can reserve our spot.

Yep, I'll submit it today.

Thank you. On that topic, Otto and I submitted an abstract for ServiceMeshCon, on learnings from benchmarking and performance testing some of the meshes. Our work has been on Envoy, but I figured we'd share some of the common learnings.

Awesome. I think we should probably sign up for ServiceMeshCon as well, just to make sure they're aware of the SIG, and make sure we can bring some of those interesting projects into our discussions.

No doubt. Actually, until last week it was only 10 bucks to register for KubeCon and 24 for ServiceMeshCon; I'm not sure of the price now. I don't think they changed the price on ServiceMeshCon; it's one of the day-zero events, as they call them, but they might have. I have the fortunate benefit of being an end user, so I don't have to pay for KubeCon, but I do have to pay for ServiceMeshCon, so I have to factor that in. There was no polite way of mentioning that, was there? I just had to qualify it with being the end user.

Ken, thanks for raising that. Actually, it had been suggested that this SIG potentially participate in the program committee for something like ServiceMeshCon, which I think makes a lot of sense. The same had been suggested for the Cloud Native
Security Day. Completely agree; makes a lot of sense. Ken, there are two other topics that come to mind that I'd love to see us take on, and it would take an individual like you to force us to do them. One is the CNCF polls that go out inquiring about usage of certain technologies, and how woefully underdone the service mesh one was. It was bad.

I did bring up that she needs to run that by us, because it did not come across very well.

Yeah, some of the choices weren't even service meshes.

Right. I think they get in too big of a hurry sometimes to just publish these polls, and they're not very useful when they do that. It adds to the confusion.

I'd like to see us, rather than doing an end user radar ourselves, collaborate with Cheryl if she'd like to, or, if the end user radar for service mesh is coming up, collaborate on that.

Definitely, I'll make sure to ping on that again. I have asked a couple of times to include the SIGs, because even if it's an end user poll, you still want the right choices in there and the right questions being asked. It would be helpful to the SIG; it shouldn't just be, what are end users in general doing in this space? It needs to be a little more guided.

Exactly. I had listened in on the last service mesh end user meeting, and, Ken, I forget the name of the individual, but he was trying to figure out whether the proposal for multi-cluster in Istio was really going to work for people in the way they namespace their Kubernetes clusters. It wasn't going to work for him, because his organization uses literally the same names for
the namespaces across every Kubernetes cluster, and the multi-cluster design for Istio required those namespaces to be globally unique. It's that kind of thing, to your point, Ken, that I think would be very helpful — getting that feedback back to the projects themselves, just some of those best practices.

One other item here: IstioCon is next week, by the way. We'll be giving a two-and-a-half-hour workshop on Istio. We'll use Meshery to help teach people, which implicitly means we'll use Nighthawk, and at some point we'll be using GetNighthawk. We'll be introducing people to SMP. So, to Ken's point about another venue for introducing some of these works to people: ServiceMeshCon and IstioCon, for sure.

Sunku, do you recall — did the CFP come back with the talk being accepted, or have you heard back yet?

No, I think the second or third week of March is when we hear back from them. Actually, the deadline for the CFP is tomorrow, so next month.

Ken can also socialize this one. Very good. Sounds like, Ken, you'll end up socializing again — you'll test the waters with Cheryl. If she doesn't come back, I'd like to help you. If I can suggest it like this: I'd like to help you organize a poll — get some questions down and see if we can get a really accurate one going.

Okay, so we've got about ten minutes left. If we turn an eye toward next week, or the agenda in a couple of weeks: doing an SMP review, in terms of where that specification is at and what open holes it has.

Sounds like — sorry, on the topic that we started, the university discussion: we'd like to participate there and see what we can do. But is there a common place — not necessarily a Google Doc, but a common place — where ideas are being discussed or shared as to what
things could be done or are being explored, or what's of interest from the university side? Is there a common place for this discussion?

It has been, to date — not by design, but just by where the activity has been and where we've gotten in touch with people — in this Slack. There's a performance channel and a WebAssembly channel. We can pull those discussions out; I very much desire to have those discussions come into here, but we don't have a regular meeting cadence for them. I'm hopeful for this to be that, though the timing of this call is tough for some of those folks. So, getting vocal there is a start, and then, Sunku, I'd like to find a time with you to get some of this better described on paper and get you introduced to SMP some more.

Yeah, I think Mithika and I have replied, so tomorrow we could sync up.

Oh, that's great. Are we missing anything here, or are we going to get some time back?

I was just about to say, on this topic — one of the reasons I'm curious about this, especially the performance side: I've been working on characterizing Envoy on bare metal, and what we realized is that it's not just the tool — there are so many combinations of how we could slice and dice the tests and understand the performance of Envoy, in terms of scaling, in terms of connections or latencies, whatever the case might be. And this is just leveraging the Envoy sandboxes published on the Envoy website — nothing complex, just the Docker containers. Once we get into a Kubernetes environment, it gets much more complex, with services and a number of sidecars and whatnot. So what I realized is that it's too much for one or two people to do a comprehensive study. Even if we prioritize the cases we need, there's just a lot of variation in what could be done.
Definitely. In terms of automating this with Meshery and SMP in a CI test — that would help. But yeah, this can be looked at from different angles: changing kernel parameters, changing L2/L3, leveraging hardware offloads, and whatnot. That's one of the reasons to see if anyone else is interested in collaborating in some of these areas. We could look at some of the hardware offload parts and collaborate on some of the work we're doing, to see how we can enhance mesh performance overall. That's one of the goals we have; I'm trying to figure out the best way to —

You're welcome to my nightmare of the last year, I'll say, mind you. I've said it on stage a few different times: people kept asking this question, so we wanted to try to answer it, and little did we realize what a nightmare performance engineering is — just the thousands, the millions, of permutations. One of the talks that was given — actually with the university, by a PhD candidate here at UT Austin — was very much about trying to explicitly identify the cost of a given service mesh function: what does it cost to do, I'll say, some boring thing like round robin. That's kind of boring, but there are a bunch of service-mesh-specific things that are interesting.

So one of the things, as we go to wrap up here, that I've heard from people from time to time — I think the first time I heard it was a year and a half ago, from the AWS App Mesh product manager, when I was introducing some of these concepts to him in the context of measuring the performance of the mesh, and whether or not App Mesh wanted to participate and go through this with us. I think his perspective was: well, hey, we're an Envoy-based data plane, so it's just going to be the same as
anyone else who's using Envoy as a data plane. And we ran out of time for me to articulate to him that I think that's a myopic, or naive, viewpoint — this was actually before Istio had refactored in 1.6 to move all the telemetry — that the way you design your control plane has a big effect on how the data plane works, and vice versa. Just because you're using Envoy inside doesn't mean that your performance necessarily looks anything like that of a different service mesh using that same data plane. Now that I've totally led the question, my question to you is: am I off in that perspective?

No, I agree, definitely. I haven't done a lot of analysis on the control plane yet, but what and how we configure the control plane — Istio, or xDS, whatever the case might be — and how the individual sidecar proxies are deployed have a huge impact on the application and on overall performance. We see that, and that's what we're trying to analyze: changing the configurations of sidecars to see what effect it has on core performance or I/O performance, and how exactly we can tune that, with respect to filters or with respect to what's happening in the control plane — how much can be handled by the control plane. Those are the things we're looking at. Even small variations in how we deploy sidecars, or your ingress proxy or front proxy — as I said, sometimes the front proxy might do a lot of work before traffic ever reaches the sidecar proxies — so how we deploy, and our environment, can impact performance quite a lot.

One more point — yeah, definitely, and I agree with you: the control plane and data plane go hand in hand. At least my team's goal is catering service mesh toward telecom deployments: looking at the 5G data plane and how 5G network functions can leverage service mesh. We're
still in early days from that perspective, but that's the ultimate goal of where we're headed. So the networking becomes the biggest part of what's happening in the mesh — the network data plane. For sure, I agree with you: it has to be very sensitive; both the control plane and the data plane have to coordinate to reduce the impact on latency.

Well, we've got an agenda teed up for next time. Sunku, sounds like we'll be able to catch up, probably tomorrow.

I love a meeting where Ken leaves with some action items — those are my favorite meetings.

Hey, not a problem, man. I was glad I could actually join a meeting for once — not at my new company, and something I can actually enjoy working on for a while. We just need more winter storms.

Well, that's very good. And I guess we'll catch up in a couple of weeks. Just to make you feel badly: we got about twelve inches in St. Louis, but the snow plows got it off the roads in about an hour and a half, and we have no issues — no water problems, everything's good here. You've got to get some snow plows down there. You guys have all those trucks — just put some plows on the front.

I know, I know — a snow plow's no more useful than a shotgun around here in the winter time. It's all of, like, three inches over here. I thought in Austin it never snows — it's never snowing.

Well, in St. Louis we only get a couple of snows a year, but we still know how to handle it, for some reason. I don't know.

Okay, biting my tongue. I can send you some snow if you want, for your water — I've got tons out front, and it's all white.

All white? Okay, good.

Oh, nice. Nice to see you all. See you in a couple of weeks. See you later. Take care. Yeah, see you, everybody.