Okay, welcome to this special edition of the SMI community meeting. Today we're going to focus on the multi-cluster discussion as captured in issue 212. And if you haven't yet, please add yourself to the Google Doc I shared earlier in the chat. That's always super useful. I don't know if you all had a chance to have a look; if not, you might want to do it now for a few minutes. Otherwise I can also do a quick screen share, if I'm allowed. Okay, I'm not allowed. Can someone else who's allowed to share their screen open issue 212 and share it so we can have a look at it together?

You should be able to share your screen now. I can share. Okay, awesome, thank you very much. One moment, I want a window... That looks great. How about that? Can you see my screen here? Yes, we can. It's a little small for me. That might work. How about now? Is that better? Yeah, that's better, thanks. Cool. So, Nick, do you want to say something?

Yeah, I hope you can hear me all the way; I'm using my headphones and the quality isn't great. So, just to give a little context: the topic of multi-cluster has come up a number of times. I think there are a number of facets to it: the fact that SMI configuration should take advantage of, or at least consider, multi-cluster capabilities, but also to promote a way for many different vendors to come together and work towards a single specification that benefits everybody. So we created this issue to set a bit of context and to capture some feedback on whether this is something the community would like to see, and if so, what they would like to see and how SMI would be able to help with it. The issue predominantly just set out the problem as I saw it, and then a lot of really good folks added their opinions in there.
And yeah, that's the context. Right, right, thanks a lot. I think that frames and summarizes the issue at hand perfectly well. I will point out, in case you haven't seen it yet... I think it is this one, let me double check. Yes, that's the one. So if you are not familiar with that KEP: the KEP is itself relatively recent, a couple of...

Okay, so yeah, I've not responded to this, but I think there's a differentiation between service mesh multi-cluster and Kubernetes multi-cluster. You've got different considerations when it comes to actually running a workload versus controlling the communication of a workload. This is just my opinion, but I'm not 100% certain that the Kubernetes KEP around multi-cluster is particularly applicable to this issue. Maybe that's a topic for discussion as well.

I tend to agree with Nick on this. So this spec is from Kubernetes, right? It depends on how we are thinking about the service mesh, even before getting into the details of the Kubernetes spec. If we think of the service mesh as something that goes beyond Kubernetes, or sits above Kubernetes in the infrastructure stack, then obviously the Kubernetes spec is not going to do the job. And even if it does today, I don't know; we would have to sit down, list the requirements, and see how well it matches. And there can be divergences in the future, because the Kubernetes spec is going to be focused on Kubernetes, and there is going to be a strong Kubernetes community giving priority to the Kubernetes use cases.

Right. Before we dive too deep into what the implementation looks like: Michelle was asking in chat whether we have specific high-level goals for this meeting, in terms of what we're trying to accomplish with this discussion. I think ultimately there is an end goal for this one.
It should be broad agreement on whether we believe that SMI is the right forum to take this further forward. I mean, I'd be surprised if we could find the solution in this meeting, but the thing SMI does bring to the table is a group of individuals representing the major players in the service mesh space who are already collaborating together.

So I can only speak for myself, not for the whole group obviously, but what I would like to see out of this, either today or in a follow-up meeting, is on the one hand answering the question: should we be doing this at all? It could be that we say, nope, not going to do it. And if yes, then the scope. Is it indeed multiple Kubernetes clusters? Is it cross-compute? It could be a monolith running on a VM wanting to communicate with, or discover, some containerized microservice or some Lambda function. So: should we be doing it, and if yes, what is the scope? If it is purely Kubernetes clusters, then to my understanding there is quite some overlap with what the KEP is covering. As far as I understand the KEP, and I'm not claiming to be an expert there, I didn't play around with it that much and it's very early days, but that would be my expectation.

Should we go around and see what others think? Sergio already said something, but Michelle, Blake, are there other expectations in terms of what this meeting should yield in an ideal world?

From my perspective, what you and Nick laid out in terms of goals makes a lot of sense to me. If there is some opportunity and we feel like this is the right time, I'd really love to hear different experiences. Sneha and Dalyan from our team have been working on a multi-cluster solution for OSM.
And I'm sure there are others who have, at least as a POC, implemented some multi-cluster solution for their own implementation. It would be really great if we could align on what the common problems are that we have to figure out. But that may actually be deeper than what I've heard we want to do today, so I don't want to get too into the weeds. If the conversation lends itself to that, great, but if not, it's okay. Okay, cool. Blake, do you want to share something?

Yeah. Nick's commenting here in the chat, and I think his comments are aligned with my views: what we're seeing is that service mesh is more than just Kubernetes. A lot of organizations I speak to are using service mesh today, sometimes something like Consul across multiple Kubernetes clusters, and different parts of the business might be using other service mesh solutions. What they're commonly looking for, across these business units or even across geographies, is a way to connect those services together: to make them accessible and discoverable, and then to provide secure connectivity across these trust boundaries, if you will. So from my perspective, this is definitely much larger than just multiple Kubernetes clusters and services wanting to talk to each other. It's more about interoperability, either between isolated islands of the same mesh product or between different service mesh solutions.

Cool. Mike, you looked like you also wanted to say something. Is it my turn? Yeah, I guess mostly just that, in terms of how this relates to multi-cluster, I feel there are two distinct cases worth considering. One is geo-redundancy between clusters: having two separate clusters to get limited blast radius and limited failure domains for services that are logically the same identity.
And I believe that is closer to what the multi-cluster services KEP is focused on. Separate from that, there is federation between independent clusters that may have services with the same name that are logically distinct: managed by different teams, different parts of an organization, potentially different organizations. Okay. Okay, agreed. Yeah. I don't see anything else then.

And I guess, also looking historically: Hamlet was clearly an early attempt at something like this. Was there something missing? Was there something misaligned with how that ended up going in practice? What can we learn from that and apply towards what we're trying to solve this time? Thank you.

Let me try a strawman, or straw-person, whatever makes most sense, proposal, in the sense that so far, at least among the people who spoke up, I haven't heard anyone objecting, saying that's an awful idea, we should not be doing this, it doesn't make sense. So let me propose, and that's mainly for scribing... oh, sorry, Dylan or Delian, sorry. Yeah, Michael, I didn't want to interrupt you, I'll go after you finish. No, please, please go ahead. Okay, my gosh, I'm so sorry that I'm interrupting, I wanted to hear what you were going to say.

So, one thing, as a practitioner coming from trying to develop Open Service Mesh, and this is in our GitHub repo: I've been thinking about multi-cluster, and I focused on one particular scenario. I really appreciate that you want to split up the various topics. I think all of them are important, and we could probably work on them in parallel. For instance, federation, and bringing in various different workloads: yes, I want to bring in a VM.
At the same time, one particular scenario that I'm focusing on, and I struggle because SMI doesn't have it yet, is: I want to have two clusters and essentially use them to increase the capacity of my existing cluster. So, one service across two clusters: two pods on two different clusters, same service. SMI doesn't have that. So in my mind I'm thinking, I can probably work on SMI to make it support that. In parallel, I can also start thinking about how to bring in a VM, and that's a similar but different problem, right? They're two different tracks of work, not really orthogonal to each other, and they could be worked on in parallel. So I'm really focused on that: two clusters, same service on both of them. I think that's something folks would want to do. And I see the problem as twofold. One is that we have to come up with a way of supporting multi-cluster within the SMI configuration. In the same way that we have the concept of a namespace, do we need the concept of a cluster? I don't know.

Let's not get into the solution space right now, I just want to take it step by step. For now, just this: is there any objection to "yes, we should be doing it"? If not, then we can say, all right, this is our position, we want to do it. Then it's a matter of how we scope it and what we focus on. But before we get into exactly what we should be doing, or this approach versus that one: is there anyone else on the call I may have overlooked who has an opposing opinion? Like, no, this is not a good idea, not a good use of our time? Speak up now, or forever hold your peace, or whatever the official words are.

Just because Sneha has been working on this, I want to ask: Sneha, do you have anything to add here? Oh, no, nothing at the moment. Okay, thanks. Cool.
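The scenario described above, one logical service backed by pods in two clusters, can be modeled in a few lines. This is a purely illustrative sketch: SMI has no cluster concept today, so the ServiceRef type and its cluster field are hypothetical names for the sake of the example, not part of any SMI resource.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceRef:
    # Hypothetical extension of an SMI-style service reference.
    # SMI today identifies a service by (namespace, name); the
    # discussion is about adding a cluster dimension on top.
    cluster: str
    namespace: str
    name: str


class MultiClusterResolver:
    """Toy resolver: one logical (namespace, name) service may be
    backed by endpoints registered from several clusters."""

    def __init__(self):
        self._endpoints = {}  # ServiceRef -> list of "ip:port" strings

    def register(self, ref, endpoints):
        self._endpoints.setdefault(ref, []).extend(endpoints)

    def resolve(self, namespace, name):
        # Merge endpoints across clusters for the same logical service,
        # ordered by cluster name for deterministic output.
        out = []
        for ref in sorted(self._endpoints, key=lambda r: r.cluster):
            if ref.namespace == namespace and ref.name == name:
                out.extend(self._endpoints[ref])
        return out
```

So the same "bookstore" service registered from cluster-a and cluster-b resolves to the union of both endpoint sets, which is the "two pods on two different clusters, same service" behavior being asked for.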
Then I would put forward the straw-person proposal that at least the people who are here, and it might not be everyone, so we would go back to the entire SMI group with it, recommend that we indeed want to do this and should be doing it, and that we can move forward to the scoping part. I can't see the doc on my screen, I have the screen share on, so if whoever volunteered to scribe wants to put that down, we can resolve this one and say: yes, we have one concrete outcome of today's meeting, we want to take on this challenge in whatever form or shape. Okay, so are there any objections to this proposed resolution of the question, should we be doing this at all? Going once, going twice. Okay, then we take that as: yes, we want to do it.

Then the question is scoping. What exactly are we looking at? Are what Nick called heterogeneous workloads, what I usually call cross-compute, in scope? Where do we want to draw the line? Do people have any preferences, any customer-based data points suggesting we should focus on one or the other? Is that something we can meaningfully discuss in 10 minutes, or do we need a follow-up? How can we approach this scoping? Any ideas? Any...

The way I see it, as a practitioner: if I have a workload in an application, it needs to be able to communicate with my other workloads. If you look at the non-service-mesh version of that, I might have a React app running in a Kubernetes cluster, and a banking gateway running on an IBM Z or something like that, sitting in some data center. I'll have a bunch of microservices or a bunch of monoliths running on my IBM Z. They all connect together, and I connect them with my standard networking protocols. Now, when you've got a service mesh, it's a layer on top of that.
You're in some ways disconnecting those applications, because you're running this higher-level network on top of your base network, which gives you the reliability and security. So I still need to facilitate that same communication: my IBM Z needs to be able to communicate with my VM, and my Kube cluster needs to be able to communicate with the VM and the IBM Z. So all of the workloads that can be part of the service mesh need to be accommodated by the specification. While SMI is obviously Kubernetes-centric, we have to consider workloads outside of Kubernetes, because they need to be covered in order to support this.

Right. So what I hear from you, and I literally mean hear because you have a really bad connection today, is that you're in favor of supporting heterogeneous workloads across compute environments. I would rather say it the other way around: I wouldn't support anything that isn't heterogeneous. Wait, wait, that's too complicated. You would not support anything that is not... us Irish people, you know. If the scope of this specification were purely Kubernetes to Kubernetes, I wouldn't support the specification, because I don't believe it goes far enough. Thank you, that I understand.

Any other opinions, ideally not using double negation, please? Any other opinions or preferences, in support of or opposing what Nick just said? I take silence in general as agreement, unless you have a strong opinion.

One thing I want to highlight: I support that statement from Nick. One thing to consider, if we go in that direction, is which problems we want to solve. If we support this connectivity... connectivity is only the baseline. Discovery and connectivity are the two things that make up the minimum viable product, I would say. But then there's more, right?
Depending on how we understand this concept of connectivity, security, and the service mesh, there is more or less that we may want to do. So think about it: if I discover and connect workloads between Kubernetes and non-Kubernetes environments, across runtimes, are we also thinking, in addition to discovery and connectivity, about monitoring and instrumenting these workloads in some way? Because when we look at the service mesh as we have it now, our workloads are instrumented: we have a sidecar instrumenting them. When we look outside the service mesh, in some cases those workloads are instrumented too. If we go to Kuma, for example, they are instrumenting the workloads. But other service meshes might not be instrumenting them, and then they are treated like external workloads, pretty much as if we were doing egress filtering. So if what we put on top is really only discovery and routing, what we are building is essentially a directory service. This is also something we have to take into consideration: to what extent are we...

I agree. In terms of scoping, I would expect, as a user, that it clearly tells me this is in scope and this is out of scope. As we have it now, and I didn't hear any objections, it is not Kubernetes-specific: it is across different compute, across heterogeneous workloads. I don't have a strong opinion right away on what should and should not be in scope beyond that. Should we, for example, explicitly and strongly standardize on anything observability-related or not? What is important is that we're very explicit about it. We say clearly: we do, or do not, take security-related things into consideration. For example, yes, it is in scope to be able to identify a workload.
It is out of scope to do, I don't know, some cert management or rotation or whatever. We need to be very, very explicit about that, absolutely no doubt. But I think it's still a little too early, and I always have the feeling that people who have already done work in an area want to jump very quickly to the solution space. I'm an engineer myself, I want to do that as well. But I want to make sure that we actually work backwards from what people out there need. And if you have been working on Hamlet, for example, then obviously you're representing your stakeholders; you're already representing, more or less indirectly, what customers and users want. No doubt about that, don't get me wrong. All I'm saying is: let's not jump directly into a discussion of whether we should use SPIRE or whether we need something built from scratch but SPIFFE-compliant. Let's not go there yet. Let's first define the scope, and we only have three minutes left, I'm aware of that. But if we could at least slice off one bit of it: if there are no objections, we capture in the group the agreement that we actually want to address heterogeneous workloads. I would be very happy if we could settle that today and wrap it up, and then use another meeting, either a dedicated one or the next regular one, to address the rest of the scoping: what should be in scope and what should be out of scope.

That's possible. Does it make sense to split them and have one group work only on multi-cluster, point one, and another group work only on point two? Are we solving two problems? We can have multiple working groups, we can have any setup, I don't have a master plan there. I just want to make sure that we're actually moving forward, that we're actually getting results, and for that we need some general agreement.
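Sergio's point above, that standardizing only discovery and routing yields essentially a directory service, can be made concrete with a small model. This is a hypothetical illustration, not an SMI API: the WorkloadEntry record and its instrumented flag are invented names, sketching how a cross-runtime directory might distinguish sidecar-fronted workloads from plain endpoints.

```python
from dataclasses import dataclass


@dataclass
class WorkloadEntry:
    # Hypothetical cross-runtime directory record: the runtime could be
    # a Kubernetes cluster, a VM, or a function platform, and the
    # instrumented flag records whether a sidecar fronts the workload.
    name: str
    runtime: str          # e.g. "kubernetes", "vm", "lambda"
    address: str
    instrumented: bool    # sidecar-injected (in-mesh) vs plain endpoint


class ServiceDirectory:
    """If all we standardize is discovery plus routing, the result is
    roughly this: a lookup table from service names to addresses, where
    mesh-level policy is only possible for instrumented workloads."""

    def __init__(self):
        self._entries = {}

    def publish(self, entry):
        self._entries.setdefault(entry.name, []).append(entry)

    def discover(self, name, require_instrumented=False):
        entries = self._entries.get(name, [])
        if require_instrumented:
            entries = [e for e in entries if e.instrumented]
        return [e.address for e in entries]
```

The require_instrumented filter captures the trade-off being discussed: an uninstrumented VM workload is still discoverable and routable, but anything beyond that (policy, identity, telemetry) would need more than a directory.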
If the majority of people here think that heterogeneous workloads are something we want, and again, I haven't heard anyone objecting to it, then how we go about it, how we implement it, is the next step. At least to me. I want to make sure we use the time we have to get agreement on these basic things. So again: is there anyone on the call who has an argument against heterogeneous workloads, who would say we should focus only on Kubernetes workloads? Going once...

I think it's important, but I think we should just recognize that it potentially expands the scope of SMI in a non-trivial way. Absolutely, yes. Something else I was hearing that I just want to call out: whether it's a single runtime or multiple runtimes, I think it's also important to separate whether these environments are under the common administration of a single entity, or under separate administration by two different entities that are trying to connect services. Very well said, yes, absolutely.

Can I come back to Mike's comment, or is it an objection per se? It's not an objection, because I think it's important. I just think it's worth recognizing that it is a substantial increase in scope. Absolutely, that's why I'm being so careful about it. I think everyone on the call is aware that, by implication, this dramatically enlarges the scope; it doesn't change it so much as enhance and enlarge it. That goes without saying, at least to me.

Cool. Then I would again put that forward: the group present today understands and supports the heterogeneous workloads direction. And the best way to wrap up for today is that we have at least two concrete outcomes.
Yes, we want to do it, and yes, we see it as heterogeneous. I would nevertheless go back to the full group next week and present that, asking whether there are any objections. If there aren't, we can move forward and talk more about what is in scope, what is out of scope, and how we organize the work: whether that should be different working groups, who the tech leads are, a number of organizational things. But for now it's still, I think, a little bit about scoping and getting everyone on the same page in terms of support for this activity. Okay, thank you so much, everyone, for this wonderful discussion and your input.

What is the cadence of this group? Should we plan the next meeting sooner rather than later? Sooner? We usually meet every two weeks, but we could change that, and for that I'm looking towards Michelle. Can we speed it up? Ideally, yes: if we could arrange with CNCF to use every other week as a dedicated multi-cluster, multi-whatever working group session, that would be awesome. Then we would have a cadence where every other week is dedicated to that, and the people who actually want to work on it show up, and the alternate weeks are general SMI for whatever we usually discuss. So, if I may ask Michelle, or Richard, or whoever has the liaison with CNCF: can we get this time slot, like we did today exceptionally, on a regular basis? Michelle says in chat she'd be supportive but isn't familiar with how to schedule it. We can sort that out offline, maybe with Richard, or whoever we need to ask.

There is such good energy, I would love to continue this high cadence and just solve it. Yeah, it would definitely be positive. All right, thank you so much, and sorry for going slightly over time. Thank you so much, and definitely see you next week. Thank you so much.
Thank you everybody. Yeah, bye now. Bye.