Hello, hello, hello. Hi, good morning. Are you based on the West Coast of the US? Yeah, I'm in Austin, Texas. Okay. It feels pretty good to say good morning, though. Generally I'm talking to a lot of folks in earlier time zones, and I've found I'm incapable of saying anything but "good morning" when it's morning time for me. So this is nice. Yeah, that's true. Sometimes you have to work with folks where the timings are crazy, early mornings and whatnot, so it's good that at least we get to sync at normal times. Yeah. That doesn't stop me from drinking coffee after the morning, though. Except for engineers. So, the meeting minutes are in the chat. I'll paste those again in a moment or two, and I'll go ahead and share my screen. Actually, as I go to do that: if you're able to access the meeting minutes, go ahead and put your name down and we'll get you on the docket. I would know that background logo anywhere. Good to see you. Yeah, I actually deal with that logo a fair bit. I've got to tell you, it's occasionally a pain in the rump, because if you're dealing with the SVG version, it's got all the vertices, and if you're not careful with how you drag it, you can actually drag all the vertices around. I feel like I've accidentally violated a couple of copyrights, or a couple of rules about how you're supposed to use logos. Well, fair enough. So, hey, the first five minutes are generally a bunch of bad jokes from me and people kindly laughing. Thanks, everyone, for coming. We're about five minutes after the hour, so let's get up and rolling. This is the February 4th, 2021 CNCF SIG Network meeting. It's a public meeting; all are invited. You don't have to be a member, you just have to put up with a joke or two, and then you can speak up.
The things that we discuss here are furthered by your participation, so please participate. A couple of you are familiar with this and some of you aren't, so I'll say it: the CNCF SIG Network, like other SIGs, has its own charter, which I won't cover, and it also includes two working groups at the moment. One is for the Universal Data Plane API, which grew out of Envoy's set of APIs; there's a working group there. The other is the service mesh working group, and it has a few different initiatives. We've agreed over the last few months to use this time to advance the service mesh working group initiatives, unless a SIG Network topic bumps them down some. And so we'll speak to a couple of SIG Network topics. If any of you have SIG Network topics, by the way, please put them in the minutes; if you have other topics, now is the time, and we'll get to them. You can see a bit about the service mesh working group and its initiatives here. We're going to talk about two of them today, one of them briefly, and then we'll spend a fair bit of time on service mesh patterns. That's the focus of today. The first topic up is Ambassador. You're all familiar, no doubt, with Ambassador. It's been out for public review for a little while, and it's proposed to be adopted at the incubation level. There is some discourse happening on the project's name and a potential renaming; there's some public discussion there. That's the state of that proposal. Any comments on that topic before we move on? All right. Within the service mesh working group, the last two times we've met, we spent most of our time discussing a collection of concerns around service mesh performance. There's the Service Mesh Performance spec; we'll talk about that in a little bit.
But there's also a project called GetNighthawk. If you're not familiar with Nighthawk, it's a load generator that was born of the Envoy project. Envoy has a load generator written in C++ called Nighthawk. It's gaining in popularity, and in part to assist its popularity and to help get it into the hands of many, there's an initiative, I'll call it GetNighthawk, that has a couple of aspects to it. The core thrust of the initiative is to create some convenient distributions of Nighthawk, of that load generator. The last couple of times we've met, we've talked about what the purpose of this project is and some interesting things that Nighthawk is capable of. Pretty neat. We're partnering with at least one university, and it looks like a second, which would be NYU, and a couple of professors at each university, to ask some questions and hopefully answer them. So while we won't cover this project again in depth today, I will highlight that since we last met, a number of actions and tasks have been laid out, and community members and contributors are picking them up. I don't know that they're represented on this call today, at least the individuals whose names are next to the tasks. So next time we'll touch base on GetNighthawk. Any comment or question on GetNighthawk? Just a comment: Otto and I got to sync yesterday. We had a brief chat about some of the requirements, some of the things you could do in terms of load generation. We were looking at how we could have an environment and a standard methodology to get consistent performance using tools like Nighthawk, so that no matter how many times you run it, the latency is consistent. So we're looking now at what the tool could do in terms of
exposing the layer 4 and layer 7 parameters for tuning. We just started this discussion, so hopefully we'll have some progress there soon. Very good. I'm keen to hear more. Maybe I'll leave it at that. Hey, I'm overdue to spend some time with you too, and Otto as well. Definitely. One thing he mentioned: it seems like AWS, followed by Google, have started to establish a set of standards for some of these benchmarking environments, to establish a method to measure and also to get consistent performance. I'm not sure of the details yet; that's something Otto mentioned he'll share soon. Looking to see what they are. Very good. And I take it that that's separate from SMP? Possibly. I'm not sure. Okay. I still don't know yet; we have a follow-up email. Nice. Good. So, the next topic up is service mesh patterns. One of the initiatives within the working group is trying to parlay a little bit with another service mesh group within the CNCF: the end user group. Those folks get together about once a month, I think. I haven't attended a meeting, but they recently invited us to come and collaborate, which is fantastic. We're hopeful to listen to a lot of the challenges they're having with service meshes and give that feedback to the projects, as well as a few other things. One: get a better survey going. All of you have probably seen the various CNCF surveys that have been done about the usage of particular technologies. The one for service mesh is egregiously wrong. And as I go to say that,
I feel like if people think about it, that sort of feels like the fault of something like SIG Network; maybe SIG Network should help with that, should help make sure it's done well. Part of that would be parlaying with that end user group. And part of discussing with them is also trying to help establish some patterns and best practices, some usages of service meshes, and helping propagate those and educate current users, and then the forthcoming thousands and thousands of others that will come to use service meshes in time. I've spent a lot of time going through these in depth. There's a list that's been shared and kind of categorized. There's an interesting parallel here: if you think about the way in which software is written, and design patterns, I think the approach being attempted here is much the same thing. As people discuss circuit breaking, just as a random example, there's a bunch of considerations around the sensitivity of your circuits: when should they break, when should they open, how quickly should they close back? The goal is to discuss patterns of behavior to examine, and that's probably all specific to the context, the applications, the workloads that are running. Each of these functional areas within a service mesh deserves a bit of analysis and a bit of, I'm trying to think of a word other than pattern, a bit of promotion of the common approach, the common use of these things. As we've been iterating on these and working on these, what we'll try to do is come forth with a simple way of articulating and capturing that in YAML. Why is that the goal? It isn't that YAML itself is the point.
It's the goal because when you're discussing a pattern like this, you're discussing the use of a service mesh agnostic of the underlying technology, whatever mesh that is. At this day and age there are 20-plus service meshes out there, and they pretty much all support a retry. So when you give an example and you're promoting an understanding of how many retries you might want to configure on your services, there are considerations you'd want to account for. If you want very high resiliency, great, set 100 retries; but there's a negative ramification to that as well, and there are considerations around each of these. When we give examples of those, it would do a disservice to the other 19 service meshes if the example is given just for Linkerd, or just for Consul, or just for whichever. So we want to be able to articulate these patterns in an agnostic way, in a simple and understandable way, and doing so in YAML makes a lot of sense. That way they can be shared around as well; people can modify and tweak them. Now, it's one thing to have that YAML as a point of reference, and it's a whole other thing to have that YAML be not only a point of reference but actionable as well: to be able to take it, apply it to a system, and have the system apply the configuration, basically apply the pattern. Sometimes that's a static application of the pattern, like just applying a configuration to a mesh. Sometimes it's adjusting the configuration of a mesh over time, because the pattern calls for it; a canary deployment, for example, is an over-time thing, or an over-a-certain-activity thing. And consequently, this leads us to a specification like Open Application Model (OAM), which is taking on the really hard challenge of describing all the things.
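As a rough illustration of the idea, a mesh-agnostic retry pattern might be captured in YAML something like the following. This is a hypothetical sketch, not a published schema; every field name is invented for illustration, and tooling would be responsible for translating the intent into Istio, Linkerd, Consul, or whichever mesh is in use:

```yaml
# Hypothetical pattern file; field names are illustrative only.
name: resilient-retries
version: 0.1.0
services:
  checkout:
    traits:
      retries:
        attempts: 3          # 100 maximizes resiliency but risks retry storms
        perTryTimeout: 2s
        retryOn:
          - 5xx
          - connect-failure
```

The point is that nothing above names a particular mesh; the same file could be shared, tweaked, and re-applied against different meshes.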
Let me make a snide remark for a moment and say: I said there's 20-plus meshes; actually, there's about to be another mesh announced soon. I don't know that it will make as much of a splash as some of the other ones that have been announced, but hold on to your horses, I guess. There's a 29th coming, or whatever the count is up to. Okay. So we're talking about the patterns and a way to articulate them, to capture them in a succinct way, hopefully an understandable way, hopefully a way that doesn't require having every one of five Kubernetes manifests fully described. This is a description of, and maybe it's naive, if it is, please comment, a description of a little bit of this challenge. It goes something like: you want to describe an application and a workload and all of its infrastructure. I would say that there isn't necessarily a single definition for this. Before OAM became a little bit popular, with Lei Zhang, who's on the call, and the group and set of contributors around OAM, I was working, you might chuckle, with some folks at Turbonomic to create another foundation, which is just what the world needed, another sibling to the CNCF. It had a really long name, something like the Application Workload Definition and Performance Management Foundation; a bunch of lawyers involved, a bunch of people involved from various tech companies to get it formed, and eventually that effort was set aside. Things like OAM and some other related specs have come forth since. Anyway, on to solving this challenge of how to describe a pattern agnostically and then have a system take that.
The challenges here are: you can't do all of that in Kubernetes. It lets you describe a lot, but not everything. And SMI lets you describe, well, as an SMI maintainer it's fair for me to say this: like every project here, it's growing; it continues to add more to its specification. Right now it's kind of focused on a lowest-common-denominator set of capabilities, and that's fine, that's appropriate. What the spec hasn't yet accounted for very well is an extensible model for describing capabilities of a given mesh that are differentiated from another mesh, functionality that isn't common or ubiquitous across meshes. That leaves a little bit of a challenge. I'm not saying SMI isn't a good spec. And then there's Service Mesh Performance, SMP. It's focused on capturing and characterizing service mesh and workload performance, so it doesn't capture all of what an application is, and it doesn't capture all of what Kubernetes has. So we're left with a bit of an underlap. The way I think of visualizing this is a little bit like this: you can describe things in Kubernetes, you can describe things in SMI, some in SMP, and there's some amount of overlap between them, in a good way. If you wanted to facilitate something like a canary deployment, or apply a pattern and have it be effected over time, you could describe some of that in a workflow definition. Maybe that's an Argo CD thing, maybe that's a Cadence workflow, or Temporal, or whatever; there are a lot of engines out there. And then there are policies and how you describe things.
I'll mention this: part of what you might define, either in a workflow or in a policy, would be, say, the initial application of a number of retries that you want to set across all of your services; but maybe there's also a policy that needs to be evaluated over time that would say, well, change that retry configuration based on some observed condition. And so our hero steps in, I think anyway, which is where we get to OAM. This is aimed at trying to describe, and Lei, you might want to step in if I totally bastardize the vision and the definition of OAM, holistically addressing workloads and a lot of their concerns. The specification hasn't addressed all of the concerns that are possible, but it has a highly extensible model for building out support, through traits, for different application and workload concerns. So this is what we want to do. I'll pause there, as I think I've characterized the challenge, and then I want to do a demo and talk about how Meshery, as a service mesh manager, a multi-service-mesh manager, is a well-positioned tool. It's a tool that was originally created for teaching people service meshes, and doing so well. Promoting patterns and having Meshery support those patterns falls right in line with this vision. But finding a way to overcome the underlap between what you can describe in these various specs has been a challenge, and part of the community there has been looking at OAM. Just very recently there's a prototype of integrating OAM to overcome this challenge, and we're going to do a demonstration, or kind of walk through, how the two have come together. But I've been talking this whole time, so let me pause before we switch modes into demo mode.
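A minimal sketch of the kind of time-evaluated policy described above, in Python. The function name, thresholds, and bounds are all invented for illustration; this is not any particular mesh's or workflow engine's API:

```python
# Illustrative control-loop policy: adjust a service's retry budget
# from the observed upstream error rate. All names/thresholds are
# hypothetical, not a real mesh configuration API.

def evaluate_retry_policy(current_retries: int, error_rate: float,
                          min_retries: int = 1, max_retries: int = 5) -> int:
    """Return a new retry count based on the observed error rate."""
    if error_rate > 0.20:
        # Upstream looks unhealthy: back off so retries don't amplify load.
        return max(min_retries, current_retries - 1)
    if error_rate < 0.01:
        # Healthy: allow a slightly more aggressive retry budget.
        return min(max_retries, current_retries + 1)
    return current_retries  # steady state: leave the configuration alone

# A control loop would re-evaluate this on an interval and push the
# resulting value into the mesh's retry configuration.
```

The initial retry count is the "static" part of the pattern; this policy is the part a workflow or policy engine would re-evaluate over time.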
So I'll ask this of Lei: do you want to expand on the definition, the vision, of Open Application Model, and maybe introduce OAM to some folks who might not be as familiar? I think most of what you're saying is correct. My team members are involved in this project, so I can't speak for all of them, but from my understanding, the motivation behind this model is to have a way to define application-level, user-facing primitives which make it easier for us to manage, for example, the Kubernetes resources you need to use for a rollout, or for traffic management. Today you have to manage a bunch of Deployments, Services, and Ingresses; how can I use a much simpler way to define all of those things, for example by providing a single YAML file whose name is "application"? I think that's the most important motivation behind this model. And of course, as you mentioned, ideally it works for any kind of platform, including Kubernetes. It actually already works on Terraform, and I think some people are working on making it work with CloudFormation. So in that sense, it's more like a universal application definition: you can define an application on top of different runtimes in an easier way. I also know that there is an integration of OAM with Helm, which is straightforward, because I can use Helm to package those YAMLs into an application and then use this model to describe it. It looks like I will have the application CRD, but underneath, the application CRD will use a Helm chart to render your real YAML files.
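For folks who haven't seen OAM, a definition in the v1alpha2 spec looks roughly like the following; the shapes are recalled from the spec's published examples, so treat the details as approximate. A Component wraps a workload, and an ApplicationConfiguration instantiates components and attaches traits:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: web
          image: example/frontend:v1   # illustrative image name
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: example-app
spec:
  components:
    - componentName: frontend
      traits:
        - trait:
            apiVersion: core.oam.dev/v1alpha2
            kind: ManualScalerTrait
            spec:
              replicaCount: 3
```

The trait mechanism is the extensible part: new operational concerns (scaling, rollouts, traffic policy) attach to components without changing the workload definition.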
That's also one proposal I've seen in the community, and I think it's very interesting. But yeah, just to mention: it's essentially a model to make it easier for people to define applications, especially if you want to build something like an application platform on top of Kubernetes, or even CloudFormation or something like that. Right. Thank you. Questions or comments on Open Application Model, or on the challenge I was articulating before? I do have a little bit of a thought here. You have a statement that just sits badly with me, which is "cloud native is hard." It strikes me that that may be true, but it's basically an indication of more fundamental errors lower in the stack. Right? Because if cloud native is hard, something has been fundamentally done poorly down the stack, and I'm not sure that just band-aiding over it on top is the right answer. We may need to get to the root of what exactly is making this so hard, because minimal toil is a fundamental principle of cloud native. If what we're basically doing is uncovering that it's hard, the one thing that I know never actually makes things easy is band-aiding over lower-level mistakes; that always just makes things harder. So the question I would ask is: you've listed a bunch of things; why are we dealing with iptables rules? If a developer has to know about iptables rules, something is fundamentally very broken. Those sorts of things: if a developer has to actually deal with DNS complications, we've got a fundamental brokenness in the underlying pieces of the platform. Yeah, I think that's valuable input. I'm not very familiar with this website, honestly speaking; it's maintained by Microsoft folks, so I will definitely let them know your feedback.
That's just one of the things that strikes me, and I'm not disputing that the things you're saying are true. I'm just saying that yet another model on top of broken is going to be more complexity on top of broken, and we should ask ourselves what actually needs to be fixed at the lower level. If your question is about why we need abstraction on top of that, I think that's basically how computer science works. Right, and abstraction is fine. But the point is, some of what you've got there is literally stuff that should never have been leaked to the point that it is. The whole game of computer science, as you said, is putting the proper facade on things so that you don't have to leak all the nitty-gritty details to the next layer. But a lot of why cloud native is hard is that a lot of those details are being leaked to a higher layer than they should be. Does that make sense? The iptables one just jumps out at me, for example; I think there's no reason any developer should ever have to see that. Yeah. And in some respects, the way I'm internalizing part of what he's saying is that it's not necessarily directed at Open Application Model; it's sideswiping OAM, but not in a negative way. It's more like: hey, in Kubernetes, why are we continuing to expose this? Yeah, to be clear, I'm actually not taking a swipe at OAM. OAM is the one who has identified, in my mind, problems that are not OAM's problems. They're problems that were created lower down, and it's trying to do its best, at the layer that it's at, to solve them. But I think somebody should probably also be going down to the lower levels and saying: why are you leaking these things? Why are they being leaked to the developer?
Why are you making cloud native so hard? Because even in the case where you don't do the inappropriate things, there's still value in having a higher level of abstraction. Does that make sense? So it's still valuable; it's just that "cloud native is hard" jumped out at me and said, hey, something's gone seriously wrong. Yeah, in some respects that means OAM is even more valuable, if that's pervasively happening. Although part of your other point is: yes, there's value in that abstraction, but at some point it's treacherous ground for the abstraction to be standing on, if the ground underneath is broken. No, you're right. It makes them even more valuable. It's just that someone should go back to the Kubernetes folks and say: look, these things are killing people; why are you leaking some of these details? I see. So I think the point is that we try to avoid saying that the current approach is wrong; right, that is the argument. There's a lot to be said for that, by the way, honestly, so I concur. Yeah, it's a good way to open conversations. I see. That totally makes sense to me. Very good. Okay. I'm not entirely sure what this is, so let me see.
Let's see how this settles with people, whether this is the right way of presenting it. So Ryan Zhang and Lei, who have been kind enough to educate some of those focused on this patterns challenge about OAM, have been warmly welcoming of collaborating and helping advance some of the traits. So we gave it a little bit of thought, gave it a week or two, and are trying to use OAM to capture and describe a pattern. I don't know that the color coding here really helps with understanding, but it's one long file; in this case it goes from out here to here, and there are things that are wrong with it that need to be fixed. But it's roughly three sections. There's a service mesh section; in this example, there's a particular mesh configuration, so that if you want to execute this pattern: hey, here's a mesh with config, run that mesh. There's a behavior section, which in this example is about a rollout, describing the application that should be rolled out and the characteristics by which that sequence is performed. In this case, this needs to be abstracted to something like metrics; all of this should be abstracted away from the specifics. Anyway, if you take that file, literally take this definition, and you give it to a system that integrates with OAM, this is how Meshery's systems make it work. Here's a juicy diagram to let soak into your mind. I'm going to walk people through it verbally, and then I'm going to hand the ball off to Karsh, an open source contributor who tackled this pretty quickly and wants to give a demo of it in action. So, as a service mesh management plane, Meshery is pretty extensible, actually, and a lot of its approach and vision lines up with OAM in that regard.
I'll show this diagram briefly. Each of the components inside the Meshery architecture is designed so you can have choice, or extend it to do different things. The architecture itself is fairly simple; for the purposes of this discussion, it's two things, or three things, I guess, whatever, it's five things. Fine. But it's a server, plus individual adapters, one for each service mesh that it manages. When you turn an adapter on, it registers, in the sequence here: it registers its capabilities, its ability to manage a given mesh, with the server, in the capabilities registry, if you will. And great, the system just sits there listening and waiting for a user to tell it to do something. So a user comes over, grabs a command line, runs the CLI, and wants to apply a pattern; in this case retries, or I think the demo will be on a rollout. The CLI interfaces with the REST API here and says: here's a descriptor, here's a behavior, here's a pattern, please make that so. Taking that simple pattern file, and leveraging OAM's extensible model of traits, Meshery takes the pattern and gets it into the OAM format. Maybe I'm going through more detail than is necessary here, but basically: get it into the OAM format, hand that over to the adapter that can interface with Kubernetes, understanding that there's a particular set of operations to execute against Kubernetes. To do that, it creates a DAG, a directed acyclic graph, to step through each part of that workflow, and does its thing. Let me stop sharing. I feel like I may not have explained that very well; it's probably really simple in concept. Karsh, do you want to try to show folks? Yes. I hope you can see it.
Yes, so I'll start with what the YAML actually looks like. Basically, this is a really simple YAML. There you are. Yeah, it's quite short, but you're trying to do a lot of stuff in here. First, you're saying: I need a service mesh. I also want to enable mutual TLS and sidecar injection. You're also trying to do a rollout in here. So basically, you're trying to define a lot of things with just this single YAML, trying to deploy multiple elements with just a few lines. I'll quickly go through how exactly this works internally. What happens is, the Meshery adapter says, "I'm capable of doing these things": there are OAM definitions, which is a broad term for the trait definitions, workload definitions, and scope definitions defined by OAM, and those are stored in the capabilities registry. Then a user doesn't have to think about exactly which adapter they'll be talking to; they can just hand over the YAML file, which is short, and apply it. Meshery would create a DAG, because, as we see right here, you can actually create quite complex workflows: you can say, okay, only do the rollout once Istio has already been deployed, or do an add-on, but only once the mesh has been provisioned. So it creates a DAG out of that and ensures that everything happens pretty efficiently; things can run concurrently. Some of these operations don't depend on anything, so provisioning your mesh and the add-ons can happen concurrently, while the other operations are sequential because of their dependencies.
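The dependency ordering Karsh describes can be sketched as a level-by-level topological sort. The task names below are hypothetical, and Meshery's real implementation differs; this just shows the scheduling idea:

```python
# Group tasks into stages: everything in a stage has its dependencies
# satisfied by earlier stages, so each stage can run concurrently.
def stages(deps):
    """deps maps task name -> set of task names it depends on."""
    remaining = {task: set(d) for task, d in deps.items()}
    plan = []
    while remaining:
        ready = {t for t, d in remaining.items() if not d}
        if not ready:
            raise ValueError("dependency cycle detected")
        plan.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d -= ready  # these dependencies are now satisfied
    return plan

# Mesh provisioning and the add-ons are independent of one another;
# the rollout waits for the mesh to be up.
plan = stages({
    "provision-istio": set(),
    "prometheus-addon": set(),
    "grafana-addon": set(),
    "rollout": {"provision-istio"},
})
# plan[0] runs concurrently; plan[1] is just the rollout.
```

This is why, in the demo, the mesh and its add-ons come up together while the rollout waits its turn.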
So now I'm going to start up the Meshery server, because the Meshery server is what the adapters register all of their capabilities with, the trait definitions and that kind of thing. We'll start up the adapter too; the adapter is what's responsible for handling the rollouts. Yes, all these logs, although pretty huge: basically, what happened was, when the Meshery adapter booted up, it passed all of its capabilities, that is, its trait definitions, scope definitions, and workload definitions, to the Meshery server, and logged that. Now new capabilities have come in, because the adapter registered its capabilities with the server. What you can do now is run mesheryctl to apply the pattern; it's experimental, still a work in progress. We can apply this YAML, and it will do the stuff you asked it to do: provision the service mesh and do the rollouts. Okay, it's doing two things: it's waiting for Istio to deploy, and then it will go on and provision Prometheus as an add-on. And once that's done, it's provisioning the Grafana add-on, and I think it might have completed. So yeah, you've got the message that Istio was created, and the Prometheus add-on and the Grafana add-on and those kinds of things have already been provisioned. Let's see. Yeah, we provisioned Istio, version 1.8.2, and that's exactly what we've got here. In this case this pod is terminated, because that's what we defined in the YAML file for the rollout. This is the first rollout; I mean, this is the first time the application was getting deployed, so we have all of the replicas running at the same time. What we can do here:
right now you can say that this is the first version that we deployed. Now suppose something goes wrong, or you want to improve this service; you may be ready to do a rollout, a canary release. So if you come in here and run Meshery again, a new version of the application will be deployed. The Meshery adapter is quite smart: it will see that Istio was already provisioned, so it will not provision Istio again, it will not provision the Prometheus add-on again, and so on. It will just do the canary, and that's exactly what it's doing. You asked it to direct 20% of traffic to version five, and that's what's happening: 20% of traffic goes to version five, while the rest goes to the other version, for the time duration that I mentioned. Right now, because it's in its initial stages, this is not very advanced. My intention is to be able to define pretty complex things in here: you should be able to say, okay, I want to run a load test here, and if my p99 latency stays under 50 milliseconds or something, then move it to 40%, or move it to 100%, something like that. It's in the initial stages, which is why this is pretty rudimentary, but the end goal is to be able to define that kind of thing in there, and because OAM is also pretty extensible, we should be able to do that. As you can see, it has now moved 100% of traffic to version five. Any comments or questions? One of the last things I was just saying: okay, to step back and say why we're talking about this. It's the service mesh working group; we're working on patterns, trying to educate folks, trying to help them adopt and use cloud native technology, and trying to help simplify it in order to do that.
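The metric-gated progression Karsh is aiming for could look something like this sketch. The threshold, step size, and gating metric are all invented for illustration; this is not Meshery's actual rollout logic:

```python
# Advance the canary's traffic weight only while the observed p99
# latency meets the SLO; on a breach, shift all traffic back to stable.
def next_canary_weight(current, p99_ms, slo_ms=50.0, step=20):
    if p99_ms > slo_ms:
        return 0                     # canary unhealthy: roll back
    return min(100, current + step)  # healthy: shift more traffic over

weight = 20                                        # start at 20% to v5
weight = next_canary_weight(weight, p99_ms=23.0)   # healthy, advances
weight = next_canary_weight(weight, p99_ms=31.0)   # healthy, advances
weight = next_canary_weight(weight, p99_ms=120.0)  # SLO breached, rolls back
```

In a pattern file, the SLO and step schedule would be the user-supplied parameters, and the mesh's traffic-splitting configuration would be updated on each evaluation.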
Hopefully — I don't know if it gets much more simple than what this enables, around that pattern file, that YAML: to be able to take that and tell a system to go. I think the rollout makes for an interesting pattern to look at, but it's not the focus. The focus here is to have a system that lets people take any one of those sixty-ish patterns and leverage them: tweak them to their own needs, explore with them, learn from them, change them — to begin to establish something of a repository of what those patterns are, to give them names, to help people be successful, and to help people understand whether or not they're doing it right. And there's a catalog of — so I'm going to speak on Lee's behalf again; you and I have only spoken once, for about five minutes, so I'm hoping this is pleasing — I don't know if it is to the project — but that some of these efforts would ultimately help advance some sort of catalog of the traits that the OAM project is producing. And the way that Harsh walked through this demo wasn't necessarily very visual, and that's fine, but — if you bring back up the Meshery UI again: outside of showing that Istio was provisioned, Meshery does automatically detect the fact that Grafana and Prometheus have been provisioned as well. It will not only detect that but auto-connect to them, and will visually display the fact that there is traffic going on, that there is this rollout happening — traffic between two different versions. We're not quite there yet. Yeah, I just think this is a very, very awesome idea, because from the Alibaba side we've received complaints from customers that they want to use the mesh by applying patterns — something like you said, patterns — to their application, instead of trying to use the VirtualService
and rules. They don't want to use that; they want to use a rollout, or maybe shadowing of the traffic. So I think the idea of the pattern is really, really amazing as a way to fix this issue. I really want to take a deep look into it. One question: did you folks implement your own runtime? How does that work? Kind of, yeah — because here the Meshery adapter actually does the OAM part: it's been adapted to perform the operations that the spec defines, that is, trait definition registration and workload definition registration. So yeah, kind of. Okay, so what does a definition look like? Can you show me, for example, one trait definition? An example of a trait definition? Okay, this is it. I see, I see. Okay, I got it. Cool. I'm not sure if it's possible for you to share that out to the project — we'll definitely take a deep look into it. I think this is what we are trying to pursue in the community, especially on the service mesh side. We're eager to see something that can be named a pattern, or something else, that gives users an interface like "rollout" or other high-level abstractions, instead of just the VirtualService, which doesn't make sense from an end-user perspective. I'm really supportive of this direction, so let's look deep into this. The VirtualService is a really nice abstraction for the level at which it lives, but you're absolutely right: past a certain point you're just laying out the links in the topology by hand, and that's too complicated to do. It's actually a really good example of exactly why you need higher-level abstractions, even when building on good lower-level abstractions. Yeah, of course. Yeah. So good. Lee, cool, good. I think now we've spoken for all of ten minutes on this initiative. As a matter of fact — wow, what an amazing thing, Harsh.
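The trait definition shown on screen isn't captured in the transcript. For readers unfamiliar with OAM, a minimal TraitDefinition in the v1alpha2 version of the spec — the version current at the time of this meeting — looks roughly like this (the `canary-rollout` name is illustrative, not the actual definition from the demo):

```yaml
# Minimal OAM v1alpha2 TraitDefinition; the trait name is hypothetical.
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: canary-rollout.example.org
spec:
  # Workload types this trait may be attached to
  appliesToWorkloads:
    - core.oam.dev/v1alpha2.ContainerizedWorkload
  # Reference to the schema (CRD) that defines the trait's fields
  definitionRef:
    name: canary-rollout.example.org
```

This registration step — telling the runtime which traits exist and which workloads they apply to — is what the Meshery adapter performs when it passes its capabilities to the server.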
And by the way, for folks who aren't familiar with this fine young man: he's in his junior year at university and worked on this for all of a few days, I think, which kind of makes my jaw drop. The whole thing, I think, is built out of JSON — I dare say he has a future in this. So, I think, to help reinforce what Lee was just saying — hey, there's something to this word "pattern" — I'm going to see if I can... Do you mind if I grab the share back from you? Yes — I hesitate to say "the ball." I know that would be all too comfortable for Ed if I were to talk about WebEx. Um — I'd rather present this, so maybe I should. There's a reason why there's a lot of forethought given to how many patterns there are and what they might look like, and that's because there's a book that will be in early release shortly, called Service Mesh Patterns. It's going to go through the first thirty of them, and we're going to include the pattern file YAML in the book. Then, anyone reading each of the thirty chapters in that first book who wants to try a pattern out can take the pattern file and put it to use — you know, go do a mesheryctl apply — and hopefully learn and be more confident in adopting and running cloud native infrastructure. A second book — Service Mesh Patterns Advanced, or whatever it's called — would cover the other thirty. There's a ton of work to do here. I fear I may end up in a divorce if others don't come to bear. The reason I make that bad — well, bad but honest — joke is as an invite to others: it's basically to say, like Lee, there are a ton of things to do with the mesh, and they're totally capable.
I know OAM wasn't built for purposes of only mesh things, and that's actually what makes it very attractive — it explicitly addresses things outside of Kubernetes, which is where, you know, the rest of the world is — and that makes it interesting. So what I'm kind of saying, in part — what I've been trying to communicate, very poorly, to Sunku — is that there are a lot of initiatives going on. I can't wait for people to get excited like Lee is and move these forward even faster. So, Lee — yeah. Yeah, definitely. I think it's getting good traction, so we'll see more involvement. Other comments or questions? Harsh didn't stay up until 1:30 AM for no hard questions — I know there's a hard one out there. Yeah, I mean, I just shared one of the trait definitions; that was for it. Okay. If you haven't checked out OAM, it's an ambitious project, and I think, architecturally, from my perspective, its extensibility is — well, it's like designing for performance up front. Usually it's like postponing security considerations: "I'll get to the security, I'll get to the performance, that will come after the features and functions." Extensibility, all the -ilities — they tend to come after. But the thing is, sometimes you've got to think about them up front, and I think the OAM project did, so, you know, kudos to that team. So, as we go to wrap here: Amy's on the call, and she has quite kindly put together a mailing list, a sub-mailing-list of SIG Network. Hold on, it's coming. Remember, we looked at it and it was like, oh my god, this is going to be a huge long thing — so watch this space for more. Shortly, though, there will be a mailing list specific to these topics. We've been holding off — and when I say "we," I mean all of you —
We're holding off on just unleashing all the service mesh chatter onto the broader mailing list, which is about CoreDNS and, you know, all of the other non-service-mesh projects. But no doubt there'll be a link in here to that mailing list in case people want to subscribe. Anybody have items that we didn't get to today that you know you'll want to have a conversation on next time? There might be some GetNighthawk things and some SMP things, I suspect. Okay. Hey — two weeks from now, the third Thursday of the month, we'll see everybody then. Thank you, Harsh, it was great. Yeah. Alright, talk to you guys later. Bye, all. Bye-bye. Bye.