Cool. All right. So welcome to 2021, and let's kick off this new year with an even better first meeting. The agenda is quite light here, but let's get into it. Thanks a lot for scribing, Bridget. And I see the first item is actually yours, so maybe someone else wants to take over scribing while you walk us through the release.

You know, I'm not sure we're going to have a lot to note here in this particular case. I wanted to make sure we talked about it, and then we'll probably continue to talk about it until we get there. Next is my release item, and I have a few links and a few notes in here. We did have a request some time ago and got the requester to add more detail. This is one of my colleagues, and one of Phil's colleagues, who was looking to use the v1alpha4 version of the Traffic Specs API. So that led to the discussion of, hey, should we be cutting a release? It's been a while. And then I know Stefan had written a gist documenting the release process. So I just kind of wanted to open the floor: does anyone have any actions they want to take, items they want to add, or feedback on Stefan's gist?

Yeah, before we get to that, I just realized I totally forgot to ask whether we have any new members, anyone who has joined us for the first time today. Yes, please introduce yourselves, a quick introduction.

Hi Michael and Bridget. My name is Amin. I recently joined AWS as an SDE; I work in the Kubernetes and serverless area. From time to time I work with the amazing Michael on the AWS Controllers for Kubernetes. And yeah, this is my first meeting with SMI. I hope to learn with you all and do amazing things this year. Thanks a lot. Welcome. Anyone else?

Hi folks, my name is Sungru. I work for Intel as one of the tech leads, and this is my first meeting today. I recently got started looking into service mesh, so I'm just learning the whole concept and figuring out how service mesh would work.
Our primary focus is tuning service mesh towards telco application models; that's what we're looking into. So I'm just getting started on the whole of SMI and what service mesh is. Looking forward to learning more.

Awesome, welcome. Can you let us know where you're based, time-zone-wise, just to give us a bit of an idea? Yeah, I'm based in Portland, Oregon, so Pacific time. Okay. And I'm based in Luxembourg, so an hour ahead of Michael. Thanks a lot. I think that's it now, right, I didn't forget anyone else? Okay, cool. Sorry for the interruption. Oh no, it was good; glad that we did that.

So anyway, I'm not assuming we're going to decide anything on the call right at this moment, especially because Stefan wasn't able to join us today, apparently, but I wanted to make sure we start thinking about what our next release looks like. Michael, I know you had some input on that in the past. What are your thoughts?

I keep muting myself. Yeah, I think we should really come up with a bit more structure and cadence. Otherwise it's this big, scary thing once a year where you version everything. You know: smaller batches, released more often, especially if we're not doing breaking stuff. If we get something done, there's no good reason not to cut a new version. And maybe that's a separate topic to decide, but given that we meet every two weeks, maybe once a month or once every two months we have fixed releases, and even if it's just one tiny new thing we get out, that's fine. At least that was my perception from last year a little bit, and it's not a criticism, just an observation, that it took quite some time. I'm sure it's easier to release smaller and more often than to do these big-bang once-a-year releases. That's my take, anyway.

Yeah. I mean, I guess, because this is a spec.
We don't want everyone who's trying to implement it to feel like it's a moving target they have to re-implement every two weeks; we don't want to strike that kind of fear into everyone's hearts. But at the same time, if people have been waiting since October for something to be in the spec, we don't want the spec to stand in the way of the implementers moving forward either. So I guess I'm asking the community: I'm interested in what everyone thinks the right balance is.

I guess that's my emphasis on non-breaking changes. If something is like, okay, here we need to clarify this bit, or there are some contradictions, whatever essentially helps people implement it better or faster, that's the push. Why should someone wait half a year? I get the moving-target concern; that's definitely not a good thing. But on the other hand, having too much of a gap between two releases might not be a good thing either.

So, I mean, are we talking about the difference between clarifications and bug fixes? I think those should be going out all the time. Since we're talking about documentation, you can always just be refining the things we've already talked about. I can see new features being scary, but, you know, we're also sitting at alpha. I mean, a new spec version is going to come out, and from the NGINX perspective we're not going to immediately uplift all the code to support it; we're going to take our time to consume it. If other releases come out, we can always leapfrog, too. I'm a little more sensitive about beta, and then actual production-level releases, than I am about alpha. Absolutely.
So, just my opinion, but I kind of lean towards quicker cadences on alpha, because I'm not that scared, and I expect things are going to break at some point. Right.

And I guess my main argument for a set release cadence, at least, you know, once a month or whatever, is that it doesn't mean every release has to be a big thing. Maybe it really is a tiny thing, but we don't need to have a discussion about when the next release will be: it's clear it's going to be there, and maybe a certain feature or a certain fix makes it into that release, or maybe it misses the train and goes into the next one. That's fine. It's just clear that every last Monday, or whatever, we're doing a release, and that's it. Then we can still decide whether feature X is ready for the next cut or not. That makes sense.

Phil, since you represent a team that's doing an implementation with OSM, do you want to address what you think about Michael's idea of having fixed dates for spec releases? Yeah, I'm fine with that. I guess the question is: what's the cadence? Are we talking monthly, quarterly? I mean, that's what we're deciding. You know, that works, and if there's anything that's experimental, you can put it behind a feature flag, etc. But yeah, that works for me. I think it will give the community a cadence of when they can expect things to be available, with everything in the changelog, etc. So yeah, I'm on board with that; you've got my vote.

Yeah, I definitely concur on a fixed cadence. I can't say what that timeline should actually look like, but expected releases, yes. So I agree with Phil and Michael on that.
And I didn't mean to resolve it today. What I meant is that we can say: okay, this is the proposal, this is what the people on this call are fine with. And next time we meet we could put it to a vote or whatever, like, okay, can we resolve that, and then use it as the first trial run. All right, that sounds awesome, and that's what I wanted to discuss. Thank you so much for putting the time into that. Back to you, Michael.

Cool. The next thing I see here is the SMI metrics discussion, since the other item, the release process, also requires Stefan's input. Yeah, it's the metrics discussion. I just threw that in there.

So, yeah, I want to take the temperature here. You all know that we've been building out OSM, and we've hit this question of: what do we do with the UI of OSM? I've had some deep conversations with Michelle, and we feel like, hey, what if we just make SMI metrics robust enough to be that abstraction layer? Because, you know, everyone likes Kiali. And then we said, hey, what if we can make this abstraction layer so that we can bring Kiali in to look at any service mesh that is utilizing SMI? We'd have that pluggable experience, because obviously our cloud teams are looking at things like Azure Monitor, and those are different sets of APIs, etc. But we're trying to figure out, hey, what can we do to make this kind of modular for the community? Because we know that even people who are using Istio like Kiali; there's just a big community feel around the look and feel of Kiali. Now, I've not dug that deep. I'm only starting to look deeply into the metrics API, but just looking through it, it doesn't look too robust at this point. Not saying that it can't be.
But I'd like to see if that's something worth spec'ing out for that type of scenario. So I know I was rambling, but hopefully what I said makes sense.

So, I'm trying to understand... oh, sorry. Yeah, go for it, Matt. Okay. Yeah, from our perspective, we think it's a little bit constrained, too, and I've been thinking about whether there are ways to extend it to add metrics into it. But then, say your data plane provides some metric that's not necessarily represented in the latency metrics of the SMI metrics API. How can we make it generic, so that I can add this key-value pair into the metric, and it's all generic and I can fill it out? And then the more I think about that, the more I wonder: are we doing something we're not really good at? I think service mesh is really good at networking and security, and good at providing observability, but should we be in the business of the transport of observability? And then I start thinking about OpenTelemetry, so I keep going in circles. Yep. Yeah, that's where I'm at. I haven't really concluded anything yet, but I think there's room for extensibility.

Yeah, you know... yeah, go for it, Michael. No, no, sorry. I was saying, yeah, we talked about OpenTelemetry, and I think it's the same use case, right? OpenTelemetry can still talk to the SMI metrics API, right, to pull stuff. Is that kind of how you would think about it, Matthew? Yeah, I mean, I haven't thought that deeply on it; like I'm saying, I keep going around in circles. But you could.
Now, I'll tell you that people internal to F5 who are trying to do metrics and visibility type stuff keep bypassing SMI metrics and going straight to Prometheus, and then they have to build their own metrics dashboards. One reason is that they understand the metrics the data plane natively supports, so they're just side-stepping it. Now, customers aren't necessarily going to have that level of knowledge, but I think they eventually will. And so we're really struggling with saying "use SMI metrics" and then being told, "well, you're just not giving us the data we really care about."

Yeah. I mean, I wasn't thinking it would be very confined. If we look at something like the golden signals from Google's SRE book, it doesn't have to be endlessly extensible, but rich enough that, hey, I've got enough data to figure out what's going on in my mesh. I don't think we'll ever have parity with all the native stuff out there, and I don't think that's the job of SMI metrics. I think it covers the golden metrics, the 80/20-rule type of stuff. That then helps us not worry so much about building mesh-specific UIs to surface this stuff up; we can just hit these APIs and draw it up. So that makes me start thinking that we don't really need extensibility, but we do need more metrics. More error rates and, you know, I don't know them all off the top of my head, but yeah.

And at this point we should look at that link Michael dropped in. Yeah, I was trying to reconcile that in my head: the OpenTelemetry project is open and, you know, supportive if we tell them X, Y, and Z should be included. And metrics, in contrast to traces, are not yet GA; they're still in flux. There are good threads as part of the stability effort, making sure that metrics and the default serialization format of OpenTelemetry are harmonized.
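The "golden metrics, 80/20 rule" idea discussed above, a small fixed set of signals (request volume, error rate, latency percentiles) rather than an open-ended extensible schema, can be sketched roughly as follows. This is only an illustration of the concept, not part of the SMI metrics spec; the record shape and helper names are made up:

```python
from bisect import insort

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an ascending list."""
    if not sorted_vals:
        return None
    idx = max(0, int(round(p / 100.0 * len(sorted_vals))) - 1)
    return sorted_vals[idx]

def golden_metrics(requests):
    """Aggregate raw request records into a golden-signals summary:
    request count, error rate, and latency percentiles."""
    latencies = []
    errors = 0
    for r in requests:
        insort(latencies, r["latency_ms"])  # keep latencies sorted as we go
        if r["status"] >= 500:
            errors += 1
    total = len(requests)
    return {
        "request_count": total,
        "error_rate": errors / total if total else 0.0,
        "p50_latency_ms": percentile(latencies, 50),
        "p99_latency_ms": percentile(latencies, 99),
    }

# Illustrative sample: four requests, one server error.
sample = [
    {"latency_ms": 12, "status": 200},
    {"latency_ms": 40, "status": 200},
    {"latency_ms": 7,  "status": 500},
    {"latency_ms": 25, "status": 200},
]
summary = golden_metrics(sample)
```

The point of the sketch is that a UI layer only ever needs this small summary shape, regardless of which data plane produced the raw records.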
So, to me the question is: Kiali is a piece of software, right? I'm not quite getting what, beyond what is offered in 199, would be part of your proposal, your desire here. Yeah, looking at this, I mean, if OpenTelemetry is the endpoint, then I'm fine. I guess I'm just trying to figure out, hey, what's the layer we can just grab the data from, where it's a complete set? Are you talking about the semantics, or are you talking about formats? The semantics, at this point. Again, if I can just go back: we're trying to solve not having to build a UI ourselves to go after these APIs and surface those metrics.

Yeah, so then I would definitely encourage you to comment on that 199 issue with what you would like there, and I'm more than happy to take an action item to work with Justin to introduce the desired semantics into OpenTelemetry. And again, this discussion is super early, we just started having it, but I wanted to throw it out there to see what the community was thinking. Like, you know, a pluggable abstraction layer to surface metrics into any system. Right.

And that's where there's this big fundamental difference: the Prometheus exposition format does not have prescriptive semantics. There is no way to define what a certain metric means; there are conventions for how to name metrics, but that's it. OpenTelemetry, on the other hand, has a very opinionated way to go about that, and I think, if we want that, we should leverage it, especially given that OpenTelemetry is very open and supportive there. If we say "this is what we need," we can just use that issue to say: here's our wish list.
There's no guarantee that everything we put there will be implemented one-to-one, and maybe someone will say, "well, this specific metric is already covered by one of the four or five categories that already exist." But at least we wouldn't be making up a new standard; we would be building on, extending, an existing one. Exactly. Yep.

What I'm still not sure about, because I know Kiali, I used to work at Red Hat, I know where it comes from and what it does, is where the disconnect is. Well, I'm making some assumptions, and I'm going to spin this up this week, but I'm assuming Kiali is hitting pure Istio APIs to draw the tracing. Is that a true statement? And if we don't know, we don't know, but that's the assumption I'm under. I'm not that familiar with how tightly coupled it currently is with Istio, but I believe it's pretty tight.

Yeah, I'm assuming that's the case. So we were approaching this as: okay, people like Kiali, that whole experience, so how can we just plug and play Kiali on top of SMI, or any service mesh that is adhering to the SMI specs? If we can surface those metrics, whether that's Linkerd, or, Matthew, what you're doing with NGINX, etc., and again, I was thinking SMI metrics, then we can build that layer, and you'll be able to draw it up with whatever service mesh is underneath that abstraction layer.

And that, to me, sounds like standardizing on a format, which gets back to what I think Michael was saying: we could be working with OpenTelemetry, then. Right. Okay. To define that. And, you know, from my perspective.
I mean, I kind of prefer that they really take the lead on telemetry: the format, the packets, the datagrams of what that telemetry is. It allows us to just feed into that project and then focus on, you know, networking and security. We don't care about the format; we'll just use the format people tell us is good for them, which would be OpenTelemetry. You know, I guess we have some opinion.

Maybe I can comment, coming in from a newcomer's perspective. I used to work on some of the telemetry collection agents, collectd and Telegraf. Most of our customers, as I see it today, are comfortable leveraging Prometheus, and they're already looking to deploy something like Prometheus and Grafana within their environments. So one of the questions would be: hey, why do I need one more agent, one more software stack, to add a Kiali, for example? If I have Prometheus, can I have a way to integrate the mesh metrics into Prometheus? It's another entity they have to look into configuring, installing, and managing. Whereas if you have something like Prometheus well integrated, for example via the OpenTelemetry format, which is where most folks are shifting to, it becomes easy to leverage it along with the rest of the stack they have for the rest of their Kubernetes applications.

No, look, I totally agree with that. I think, again, it depends on the kind of environment you go into, right? So, hey, you go into an environment with Prometheus 400-level folks, and that works for them. But if we go to the core of what SMI is about, it's to simplify the experience, right? Prometheus is going to give you a thousand knobs and a thousand buttons, and you can just go to town.
But for someone who's just entering this space and just wants a pretty UI, a simple experience, not so worried about all the other bells and whistles, even something like Prometheus can be pretty intimidating, you know, having to create all those queries, etc. Exactly. I mean, I've yet to explore the functionality of Kiali, but just from the perspective of what's being used, OpenTelemetry and Prometheus are the common ones everybody's moving towards.

And just to clarify, the offer, or the idea, here with OpenTelemetry: when we say semantics, it's essentially giving a name and defining exactly what its meaning is, right? So this metric here, http.server.duration, measures exactly the duration of the inbound HTTP request. That's it. So when you see it, there is no doubt what it is; people don't have to guess. The Prometheus exposition format, by contrast, is descriptive around that: it says, okay, this is the way things should be named. OpenTelemetry, in that sense, is really prescriptive: it says, this is what the metric looks like. And our task here would really be, on the one hand (and Phil, if you want to take the lead on that, I'm happy to support you there), to review the existing OpenTelemetry semantics that are already there that we can leverage: here, this is already covered, and here is a set that's not yet there, that would be new. And then how they implement it, whether they expand an existing category or put it somewhere else, that's an implementation detail the OpenTelemetry folks need to sort out. But I'm also active in that community, so as I said, I can definitely support the process from the other side as well.
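To make the prescriptive-vs-descriptive contrast above concrete, here is a toy sketch of prescriptive semantics: a registry where every metric name is bound to an agreed unit, instrument type, and meaning, so a consumer never has to guess what a value measures. The structure only mimics the style of OpenTelemetry's semantic conventions; `http.server.duration` is a real convention name, but the definitions below are paraphrased for illustration, not quoted from the spec:

```python
# Toy semantic-convention registry: each name carries a fixed, agreed meaning,
# unlike free-form naming where producers choose names and units independently.
CONVENTIONS = {
    "http.server.duration": {
        "unit": "ms",
        "instrument": "histogram",
        "description": "Duration of inbound HTTP requests, measured server-side.",
    },
    "http.server.active_requests": {
        "unit": "{requests}",
        "instrument": "updowncounter",
        "description": "Number of in-flight inbound HTTP requests.",
    },
}

def record(store, name, value):
    """Accept a measurement only if the metric name is a known convention;
    return the unit the value is defined to be in."""
    if name not in CONVENTIONS:
        raise ValueError(f"unknown metric {name!r}: not in the agreed semantics")
    store.setdefault(name, []).append(value)
    return CONVENTIONS[name]["unit"]

store = {}
unit = record(store, "http.server.duration", 12.5)  # accepted: known convention
```

With a registry like this, any dashboard sitting on top (a Kiali-style UI, for example) can interpret every value unambiguously, which is exactly what a merely descriptive naming convention cannot guarantee.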
I think that, in the sense of having something that is interoperable and leverages existing CNCF projects, this probably makes the most sense in this context. Yep. Okay, cool.

All right, we have a few minutes left, so I'll open up the floor to any other business. Is there anything you would like to see, anything you would want? For example, the upcoming KubeCon / CloudNativeCon Europe. Since you bring that up: I have thirty-some tabs of service mesh talks open that I'm reviewing for that, and it looks like a lot of stuff. I'm almost done with my observability ones. Yeah, I know that review deadline is coming up soon; I've been lazy over the last two weeks. Holidays. Same here. Haven't we all. Very exciting.

All right. And if there is anything, and here I'm looking especially at the people who recently joined us: it's kind of help-yourself, right? You don't need to wait until someone asks. If there are issues, you can start reviewing; you can work on whatever you like. During the two weeks between meetings we're usually on Slack, and it's not super busy, so if you want to chime in there and ask something or suggest something, that's definitely the way to do it; you're not going to be overwhelmed. Yes, absolutely, thank you. Yeah, I've started reading, and I'm a bit lost in the amount of information out there. For sure. My interest, to start off with, is looking into KPIs for east-west traffic, not north-south traffic, and I saw some of the discussion in earlier notes, so I'm trying to understand more, to figure out how best to establish that from a telco perspective. And don't be shy. Fresh eyes are always good: if you've been around something for longer, you probably don't question why you're doing it the way you do, and fresh eyes always bring a nice new perspective on why we're doing things the way we have. Absolutely.
Thank you. Cool. Thanks a lot, and see you again in two weeks' time. Bye-bye.