Greetings and salutations. Hello. Hello, Amy. I hope there's no offense taken that the spelling of your last name is something I just haven't been able to memorize yet. I go by Amy nearly everywhere. Also, I did mess with people by making it smart up here and, you know, I did this. I did it myself. It's fine. See, I came to help. "You are not from the government" is the important part. I mean, I could be from the government, or like the weird Canadian railway thing. That's fun too. I'm not aware of the weird Canadian railway thing. It's like CN or CF, I can't remember. It's too late in the week for all of this. Whatever. It looks like we have a full agenda today. I think so. Yeah, we've got, sorry, too many things going on. We've got some follow-up items from the work streams that were introduced last week, or the last time we met. A couple of individuals that I expect to see on the call to talk through two of these: Michelle and Nick Jackson are two folks that I'm hoping will show up in the next minute so we can go. Yeah. Does anyone else have agenda items? Because this is the most appropriate time to call them out and put them in. I have a tiny, tiny thing. Housekeeping note: Zoom is about to start gatekeeping meetings on passcodes. It seems like we are going to roll with the universal passcode of five. So it should be a fun time. Apparently you can also embed it in the URL, which I find hysterical because it renders the entire exercise utterly pointless. But there we are. I have no comment. I'm going to be updating all the calendar invites. If you get locked out of the room, it's on the public CNCF calendar, so I'll be kind of lying in wait to let people in if they get locked out. I'm still just absolutely laughing about Zoom doing this. And then you just stick it in the URL. So we're going to replace it.
You know, a passcode that's embedded right in the URL, and somehow that makes things more secure. Yeah, but we've decided that the universal passcode is the best way to go around this. So here we are. I'm not snarking at you, I'm snarking at Zoom. Sir, this is a Wendy's. All right. Back to you, Lee. All right. Very good. Well, let's get started. Let me share the meeting minutes just to make this a little more interactive. So we've got: Ed is here. Amy is here. Thomas is here. Josh. Abhishek. Matt. And, yeah, I think I called everyone's name. Very good. Josh, and is it Thomas or Tomas? It's Thomas. Good deal. Very nice. Very nice to have you, Josh and Thomas. Thanks for coming. Thanks for coming. Just by way of introducing part of today's agenda: we're going to follow up on some work stream items that are within the service mesh working group. The slides and charter for the service mesh working group can be seen here. The CNCF SIG Network charter is broader than a service mesh focus, but we've got enough people here whose current focus is oriented towards service meshes, and there's enough to do there, that a service mesh working group has spun up. And in lieu of other more pressing concerns with different projects, we're using this time to advance some of the initiatives within the service mesh working group. There are about four initiatives within that group right now. Just as a brief recap for everyone: we've had a little bit of a talk about service mesh interface conformance, or SMI conformance. For those that were paying really, really close attention, I think yesterday, or maybe Tuesday, there was a little bit of a leak.
It feels like a leak of yet another new service mesh that's being announced. It's been in the works for a while. I hesitate to name it because it feels like it was an accidental announcement. But this particular service mesh uses SMI entirely as its REST API. Or rather, its implementation is done through SMI, and that's the only interface that it has. And so I highlight it just as another sign of the steam building behind SMI. Another project inside this working group is SMP, Service Mesh Performance. Another one is with part of the Envoy project, the Nighthawk load generator: trying to make load generation distributed, coalescing those results, doing statistical analysis, examining things in a distributed world. Then the fourth aspect of this working group is a bit about service mesh patterns. And so, depending upon who else shows up today, we might actually get to those patterns. So, any comments or questions on the service mesh working group? We were going to use this time, if Michelle is on, to talk about SMI conformance, specifically about the tests that are asserted to validate conformance for SMI. And I'm looking at the... yep, Michelle's not on. We'll be missing that topic, but maybe I can be on record to answer a recent question that Michelle had had. Not that we need to be on record, but one of these initiatives is SMI conformance. The more service meshes that are out there, the more that people and projects might like to confirm, sort of officially, that they are in conformance with the APIs that SMI sets forth. To perform this type of conformance testing...
Really, most of the tests end up needing to be end-to-end tests, which means there's a fair bit of environmental provisioning: a sample workload, load to be generated against that workload if necessary, and then asserting that a particular behavior would have happened and validating that behavior. And so there's kind of a lot of tooling involved. The service mesh management plane, Meshery, is what's being used to help run, or orchestrate, those tests. So this piece of work was established as a goal, I think, about a year ago, last October. A couple of bright students worked on it for a few months, and then they sort of handed it off to a couple of other bright students. And we're essentially at a place in which Meshery as a tool supports eight different service meshes today. Some of the later ones that it supports, Open Service Mesh and Kuma, are kind of prominent examples of two service meshes that claim support for SMI, and that you can validate SMI with today. There are a few concerns specific to how you would do conformance testing in this way. One is that, you know, similar to there being, I forget, at one point 80-something Kubernetes distributions, maybe there's more or less now, I don't know, but to be able to claim that the software you're shipping is in fact Kubernetes, it needs to adhere to and behave in accordance with the signature of the Kubernetes APIs. And so, in the same fashion, should a service mesh claim conformance with SMI, it should behave and respond in accordance with that specification. That's what this initiative is about.
So, as each individual service mesh project, and as service mesh users, are able to perform validation, one important consideration is to acknowledge that not all service meshes intend to fully deliver on the SMI specifications. For some of its specifications, they just don't ever intend to have that capability. So it's important to acknowledge that there's a difference between capability and compliance. If the mesh has that capability, is it in compliance? Or if it says, hey, it's never going to have it, then it's not failing a test, because the test is inappropriate. That's kind of a point of discussion, I think, something that probably not enough eyes have been on. Let me call upon the collective minds that are on this call right now and get a couple of opinions. This thing about capability and compliance: no doubt each of you understood the difference in what I was articulating there, but as you look at a report that says these service meshes are in compliance or not, does it also make sense to you that that same report would itemize particular APIs, or an entire API, for which a mesh will never have that capability? This is just a mind exercise for everyone on here: in your mind, is it possible to be compliant with the specification and yet not implement a certain percentage of it? So, a lack of implementation. Yeah, so traditionally speaking, if someone says I comply with MumbleMumble, I would expect whatever MumbleMumble specifies to actually work. And so if you want to claim general compliance in a partial way, you sort of need some way to talk about the subsetting in a sane sense, right? So you can say, okay, we support SMI, and we support, you know, profile one, profile three, and profile seven.
So that you actually can understand clearly what's being done, and that it's being done correctly. Yep, I agree. Here's a thought, to play this out and see if it makes any sense: some service meshes have gateways as part and parcel of their design, like an ingress gateway and an egress gateway. Others don't have that as part of their architecture; they're more of a bring-your-own-gateway, or bring-your-own-ingress. And so for them, an SMI spec that calls for you to be able to configure a gateway in such-and-such a way: is that still handled in accordance with what you were saying? Generally, in most projects, it would be simplified to language like: well, assuming that service mesh is passing every other test, and that the gateway tests made up 30% of the spec, this service mesh is 70% compliant. I don't think the percentage tells you anything, because I could be 70% compliant in the way you just described, where I simply am not implementing a gateway, and so I'm not doing that, but I have stellar compliance on everything I do implement; or I could be 70% compliant because I'm a complete shit show. And there's literally no way to distinguish there. It may be useful to think about how long-running standards have dealt with some of this. You know, I come out of the more traditional networking world, and you've got standards like BGP that have literally been with us for decades. And so if you say, okay, I'm compliant with BGP, that literally never means that you're complying with 100% of the RFCs, because no one is. What you'll talk about is: I'm complying with this or that. You get a common shell of RFCs that most people comply with, and then you say, well, and I comply with this RFC or this draft, which brings new features to the protocol.
Now, there's not a lot of mechanical difference between somebody introducing something new to the SMI spec versus you not supporting gateways. In both cases, you need some reasonable way of saying very succinctly what you do and don't support. And part of this has to go back to the SMI community, right, about how they classify their stuff, because that's where it should come from. But being able to say: I have 100% compliance with SMI core, and I have no compliance at all with SMI gateway, for example. Okay, well, clearly you don't support SMI gateway. It doesn't mean that you're screwing it up; you're just not even trying. And that's really quite different, because somebody who has a really poor compliance score on something they're trying to do, that could be a poor indicator in general of software quality, whereas really stellar compliance scores on all the things you claim to be compliant on, that's quite different. It pleases me to hear you say that, yeah, because as we were talking this over initially, I had brought this up: to really acknowledge that, hey, it wouldn't feel fair to some, or it's not necessarily about fairness, but it would feel like a misrepresentation that a given mesh might have a stellar implementation but always be represented as a 70% pass rate. Comparison often helps. Like, if you want to talk about not only being useful to users, but unfair to service meshes, right?
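The profile-style subsetting discussed above could be sketched as a small scoring routine that reports "not capable" separately from a pass rate. This is a hypothetical illustration only; the profile names, result shapes, and numbers are invented and are not part of any actual SMI or Meshery tooling.

```python
# Hypothetical sketch of per-profile compliance scoring, illustrating the
# capability-versus-compliance distinction: a profile a mesh never intends
# to implement is reported as out of scope, not folded into a misleading
# overall percentage. Profile names and test counts are invented.

def summarize(results):
    """results: {profile: {"capable": bool, "passed": int, "total": int}}"""
    report = {}
    for profile, r in results.items():
        if not r["capable"]:
            # Never intended: out of scope, not a failure.
            report[profile] = "not capable (out of scope)"
        else:
            report[profile] = f"{100 * r['passed'] // r['total']}% compliant"
    return report

results = {
    "smi-core":    {"capable": True,  "passed": 12, "total": 12},
    "smi-gateway": {"capable": False, "passed": 0,  "total": 5},
}
print(summarize(results))
```

A mesh with no gateway support would show "not capable" next to the gateway profile rather than dragging a single blended score down to 70%, which is exactly the ambiguity the discussion above is trying to avoid.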
If I'm a service mesh that has nailed the part that I'm doing, and I have a 70% pass rate, but I'm doing 100% of everything I set out to do, and I'm doing it correctly, to have someone look at a table, sort by pass rate, and see a service mesh that's trying to do everything with an 80% pass rate but is screwing up left, right, and center, that's a deeply unfair way to couch the comparison, and very unhelpful to the poor schmuck who's trying to make a selection decision. So the visualization of the test results, or, whether it's visual or just the table or the result set, something that identifies the posture of a given mesh according to the specs sounds, in this discussion, fairly important to be able to articulate. There are four SMI specs. I'm saying some things that some of you know, but there are four SMI specs. These simple verbal statements are high-level descriptions of some of the tests that will be asserted and verified. Some of that is like, you know: you do a traffic split, you deploy a sample app, you generate some load, you validate that of the 100 requests sent, 50 were split as expected; you're doing this end-to-end verification. And each of the tests is given a unique identifier. So the thinking here is that the table, or the result set or the spreadsheet or whatever is displayed, would list each of the individual test identifiers, the monikers for what each test is, whether or not the mesh is capable, and if so, what its status is in terms of compliance. And I don't know, this might be more specific than it needs to be. It might be that there just needs to be a kind of binary black or white: they're either capable of having gateways or not. Not like, oh, they're partially capable because they're capable of having an ingress but haven't implemented an egress.
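The result set described here, one row per test identifier carrying a capability flag alongside pass/fail status, might render along these lines. The test identifiers, the row format, and the statuses are all invented for illustration; the real conformance tooling may represent this quite differently.

```python
# Hypothetical rendering of a conformance result set: one row per unique
# test identifier, with capability reported separately from pass/fail.
# The IDs and results below are invented for illustration only.

def render_rows(rows):
    lines = []
    for row in rows:
        if not row["capable"]:
            status = "N/A (not capable)"
        else:
            status = "pass" if row["passed"] else "fail"
        lines.append(f"{row['id']:<24} {status}")
    return lines

rows = [
    {"id": "traffic-split/50-50",   "capable": True,  "passed": True},
    {"id": "traffic-access/deny",   "capable": True,  "passed": False},
    {"id": "gateway/ingress-route", "capable": False, "passed": False},
]
for line in render_rows(rows):
    print(line)
```

The point of the three-way status is the one made in the discussion: a reader sorting this table can tell "not even trying" apart from "trying and failing" at a glance.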
So yeah, and you're right, this is... So the model I always go through is somebody making a selection decision, right? I'm going to go and try to make a selection decision about a mesh. There's a set of things I care about, a set of things I don't, and maybe a set of things in the middle where, yeah, it'd be nice to have, but it's not really a deciding factor. And how do you present to them the information that enables them to crisply see how to make that decision? Yeah, it makes a lot of sense. Hey, good. Yeah, within a particular shell of features, percent pass rate is fantastic, but failure to support a feature doesn't strike me as the same as failure. Right. Yeah, at least not in many cases. That's helpful. I'm an SMI maintainer, but I'm one of eight. With the other two people that I mentioned, it would have been three of eight here. This is fantastic input. I'm remiss; I'm sad that there aren't a couple of others here to help make this concrete. Maybe the last item on this topic of SMI conformance: Michelle was going to come today, and I think it was for the first time sort of earnestly digging in. Now that she and her team directly represent a service mesh, she was coming over to engage and understand whether or not the tests that are defined today are all that we need. And they're absolutely not: the team of open source contributors that have put this together has only defined a certain number of tests.
We really need the rest of the community, the SMI community, to articulate what those tests should be. And so as she's looked at it, she'd asked this question here, and in my mind, I think the essence of the question comes down to this. Take a traffic split test, and please correct me if anyone sees what I'm about to say differently: to validate the traffic split, as an example, splitting a certain percentage of requests from one endpoint to the next, is to verify that a given service mesh implements that in accordance with how SMI has defined it. The reason that we put this tooling together is in part based on the premise that I don't know that you could, with high confidence, affirm a service mesh's compliance without doing an end-to-end test. If you think about that for a minute, you know, the nearest related project to this SMI conformance tool is Sonobuoy, which does conformance testing for Kubernetes. And that's batch-based; it's a different system. It doesn't have all the same complexities that running services and sample workloads have. Cool. So anyway, my perspective is that, yeah, you need end-to-end testing. And Michelle's example of a need that they have is a perfect test case. So, any other comments on SMI conformance testing? There were two other topics, at least, on the agenda as it is right now. Given that Nick isn't here, because he was going to lead this discussion on SMP, let's skip over to the third one, which is about service mesh patterns: identifying patterns and helping people understand them. So hopefully everyone is able to open the link, in case we're not able to see it very well. But if you would, have a gander and comment. These patterns are broken down, for the most part, into different areas. They're not really identified by what is foundational and what is advanced.
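The end-to-end traffic split validation sketched above (generate load, tally which backend answered, compare against the configured weights) could look something like this. It is a minimal sketch under invented assumptions: the backend names, weights, observed counts, and tolerance are illustrative, and a real conformance test would drive traffic through the mesh's data plane rather than start from pre-tallied counts.

```python
from collections import Counter

# Hypothetical check for a TrafficSplit-style conformance test: given the
# configured weights and per-backend request counts observed end to end,
# assert the observed split falls within a tolerance of the expected split.
# Names, weights, counts, and tolerance are invented for illustration.

def check_split(observed, weights, total, tolerance=0.1):
    """observed: Counter of backend -> hits; weights: backend -> fraction."""
    for backend, weight in weights.items():
        expected = weight * total
        if abs(observed[backend] - expected) > tolerance * total:
            return False
    return True

# e.g. counts tallied from 100 generated requests against a 50/50 split
observed = Counter({"v1": 48, "v2": 52})
print(check_split(observed, {"v1": 0.5, "v2": 0.5}, total=100))  # True
```

The tolerance matters because load balancing is statistical: exactly 50 of 100 requests will rarely land on each backend, so a strict equality check would fail a perfectly conformant mesh.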
There are a few that say foundational next to them, but I think there's a number of others that are just foundational as well. Like the notion that there's a pattern of using a service mesh to do retries: that's probably somewhat foundational. The pattern of doing chaos engineering with a service mesh might be more advanced, or implementing business logic in your data plane, more advanced. So, to orient all of you to this, and to hopefully have you influence it and make comments on it, I'll say that this first area is more of a recognition that there's more than one service mesh out there. And also, part of the notion here is that through a pattern we would be able to show that a service mesh isn't just for the operator, or just for the developer, but is for both of those personas, and is for the product owner or the service owner too. And to demonstrate how a service owner is empowered with more intelligent infrastructure to affect the behavior of a given set of services through configuration. So that's kind of what this collection is about. This next one is just: how do you get up and going, either locally or remotely. Then different service mesh architectures: a popular pattern of sidecar proxies being used; there were service meshes of the past, and current service meshes, that also use a node agent, like a DaemonSet model; and there's also a more recent concept of a proxyless service mesh pattern. So I won't walk through all the patterns, but take a look if you would. Depending upon how many folks get engaged and how much time there is, we might be able to assist in pointing people to resources about these patterns, having a published list of the patterns, having the patterns well described. But for the most part, I think the ask that I have of folks, at least for my part, is just to look this over and see if you think that there are any patterns that are missing. All in all, about 60 patterns.
Cool. Okay. Well, then, I don't know that I'll go into this discussion. I'll point out to people: if you haven't looked at SMP, Service Mesh Performance, go have a look. It's an emergent spec. But I don't think we'll jump into the discussion that we were going to have, just because Nick isn't here to present it. So with that, are there other topics? Any ambassadors' due diligence? I don't think Matt's on the call anymore. He was. Oh yeah, still. I can check in on that. We have a call next week where we're going to be discussing how to better scope some of the due diligence projects for people that are coming in either at incubation, or if they want to move from sandbox to incubation. More to come on that. Okay, thank you. Ed, did we have the NSM annual review? That's long since gone; is that all reviewed and in history? No, it's still a little bit underway. Several months went by, and then the TOC came back to it, and they had some really, really smart questions, some of which you just kind of go: I feel stupid for not having answered that. So, for example, they asked what versions of Kubernetes we support with our releases. And we've done very broad testing across different kinds of environments, various public clouds, etc. So I went back on that question and actually re-ran our CI going all the way back to 1.12 and, just for good measure, going all the way forward to 1.19. And in the course of that we basically found that all was well across that range, got that up on the release notes pages, and made a comment to that effect. I've still got a few more questions they've asked. Unfortunately, to really do a good job of answering some of their questions, I'll need to draw some pretty pictures, because they're asking about things like what new features came in where. And I can do sort of a dry bullet list, but it's not going to help the TOC, I don't think, because they're not day-to-day in the muck.
And a couple of pretty pictures, actually on the release page, would make that really clear. So that's kind of where that is right now. I've still got a few more things to do to answer some of the questions they came back with, and then hopefully to go back to the TOC. Okay, very good. Curious, any... 1.19, not 1.9. Oh, sorry. Good. Any questions on governance, or in and around governance? Not at this time, no. Nobody's actually asked any questions on that at this time. Well, I'm trying to rack my brain. Do we have other topics? We've got a lot of other projects; there's a number of projects that I'm not able to keep pace with. I was talking to a Google Summer of Code participant that just got done working on some machine learning in CoreDNS, and the use case, I think, was about DNS blacklisting, DNS filtering, and the machine learning around that. And for my part, I always enjoy hearing about interesting projects or getting updates from the projects within the SIG. We've yet to establish much of a cadence of having the occasional report from those groups. Ken, Ed, Amy: is having some of the maintainers from each project present once a year, twice a year, not to put a burden on them, but just inviting them to do so, a good forum for highlighting, you know... Yeah. So, I mean, my experience with that sort of thing is that a lot of it comes down to the tone with which the invitation is extended. I've seen people get very excited about: hey, you know, we have a venue here; we wanted to offer the opportunity for you to come in on a cadence that's comfortable and tell us about your progress and sort of spread the word a little bit. That invitation goes super well. "We have scheduled you for blah-blah-blah date to show up and report to this committee": that invitation goes really badly. It's remarkable how much of a difference tone makes.
Yeah, and kind of to follow along with that: one thing that has changed as far as people getting talks at KubeCon, which I know is a huge thing, is we've now limited projects and SIGs down to one 35-minute session, because, frankly, we're hearing from community feedback that there's too much content and it's overwhelming. So being able to extend the opportunity for projects to come and showcase their work in another format, one that's still available for all to come and participate in, and that goes up on YouTube, would actually be quite welcome. So yeah, I'm curious. Yeah, and the other piece of it, quite frankly, is KubeCon is a giant ball of content all at once, and that's intensely overwhelming. Whereas: oh, this week, in a week where not a lot else has happened that I've noticed going by, there was this talk being given, or that had been given last week, on this topic over here, and that looks interesting. That's often much less overwhelming for audiences than: we have 500 talks, which one would you like? And Ed, you characterized the intent of what I was asking really well, which is mostly just: my goodness, there's a lot of very interesting things going on in these groups, and I don't know that they know this is an open forum, you know, a platform to elevate their work and to get some feedback on it. Well, again, quite honestly, I mean, I'm literally in these meetings almost every time, and it would never have occurred to me to come and do that here. You do sometimes get this thing that happens with people, and it varies by personality, where it's like: oh, the SIG is doing important stuff, so I want to be very respectful of its time, and treating it as a platform to promote my projects might feel forward; not for everyone, but for some people. And so, like I said, I think the well-toned invitation could be really powerful. Yeah, good. Yeah.
Actually, I'd gotten a question from Jim, I think it's St. Leger, currently of Intel. He's been on these calls a few different times and has asked some good questions. He'd recently asked about the relationship between CNCF SIG Network and, it wasn't Kubernetes SIG Network, it was Kubernetes SIG Network multi-cluster, or one of the many networking subgroups. It does get difficult to keep track of all the subcommittees in Kubernetes. It's a very effective way for it to internally self-organize, but it does get to be a bit of a thicket for folks who are just trying to casually track from the outside. And so that was a good reminder to me of what you just said: that folks don't necessarily know that this is an open venue to them, or that... Well, one thing that might be helpful, and I don't know how much work it is, is just having an index of all the things going on on a regular basis that CNCF SIG Network thinks might be of interest to people who are interested in networking in the cloud native space. So: here are the relevant meetings that are happening in the Kubernetes networking SIG and committee space, and here are the community meetings that are happening for these communities, and that kind of thing. It's sort of like, I really love the CNCF landscape, but it has gotten to be so big and so daunting. Maybe just a smaller-scale version, and probably not as graphically designed: this is the CNCF SIG Network notion of the landscape of cloud native networking. That might be interesting. You would not be the only SIG to have put together a landscape, either. It's actually quite easy to fork it and build your own. Oh, see, now you're just tempting us. Welcome. Nice. Hey, I have suggested something, and you can take it as you want to. And Lee, I've also dropped in an enthusiastic yes to helping projects come on over. If you need help getting in touch with them, I'm happy to help.
Thank you. Beautiful. And was that the suggestion, Amy? No, the suggestion was: if you wanted to look at the landscape, I'm pretty sure that SIG App Delivery is also kind of running down the same path. Nice. Nice. Yeah. Okay. I think, for my part, we'll take you up on one of those offers. Good. Well, any other topics that you think we should cover today? No, I think we covered everything well. So we're going to be giving time back. But before we do, I've got to say, my hand's been sort of shaking; I haven't had actual human contact in a while. I didn't get my KubeCon in, and so I've got to be friendly and social a bit. Yash, since you're on the call, and Thomas, do you guys mind introducing yourselves? I'd love to get to know you some, and just what you're into, what your focus is. Yeah, sure. I can go first. So I work at VMware, and I'm working on the Kubernetes on vSphere product. So this is just a way for me to keep in touch with what you guys are doing, and I've just been attending some of the meetings; this is just me looping in and figuring out some of the things that you guys are working on. Yeah, very good to have you, Yash. For my part, if any of this strikes an interest for you, know that your comments are very welcome. For me, to give you more of a background, I'm working on the kubelet-equivalent team; we run an equivalent of a kubelet, a Virtual Kubelet fork, on ESXi. So I'm working on that team. A lot of the networking is a bit over my head, but I'm just interested in learning about it at this point. Yash, something else to maybe click on and spend 30 seconds looking at is this: there's been a collection of folks who've been working on helping define cloud-native networking and some of its concepts and principles. It's a resource; it's been a point of discussion in this SIG in the past. So there's a link there. And then, Thomas? Sorry, do you hear me? Yep. Yes. My name is Thomas.
I'm a staff engineer at Dynatrace, an observability company. I'm dealing mostly with cloud-native and Kubernetes stuff, and I have a strong interest in networking; I worked as a network technician about 10 or 20 years ago. And I want to get more in depth on service meshes and cloud-native networking, and this is why I'm here. Very good. Very good. Well, Thomas, I'll try to catch up with you in Slack, just to socialize a bit and give you a breakdown of some of the activities that I know about that are going on. This will be very cool. Fair enough. Very good. I'm going to go shake down Michelle and Nick for not showing up today. Other than that, I think that's a wrap. See everyone in a couple of weeks. Thanks, everyone. Thanks, everyone. Bye, all.