All right, well, we'll go ahead and get started. Good afternoon, everybody. My name is Mitch Connors, and I'm a software engineer with Google. And there, I work on Istio and the Anthos Service Mesh, focusing on usability, making sure that our user experience is up to par. I'm really happy to introduce this panel today, where we're going to be talking about war stories from inside the development of various service meshes that you've heard about throughout the day. And I'll go ahead and ask my panelists to introduce themselves and share one thing that is unique about their service mesh.

I'll go ahead first. My name is Irina Shostava. I'm an engineer on the Consul service mesh. And something that's unique about the Consul service mesh is that it's kind of like a multi-platform service mesh, so you can maybe call it platform agnostic: a service mesh you can pretty much run anywhere. I'll hand it over to you.

Hello, my name is Phil Gibbs. I'm a senior PM at Microsoft. Basically, I PM a lot of our security projects, Open Service Mesh being one. And I guess what's unique about us is that I think we are the youngest in the game, possibly.

Hey, everyone. I'm William Morgan. I'm the CEO of Buoyant, which makes Linkerd. And what's unique about Linkerd, I guess there's a lot, but one thing that comes to mind is we might be the only service mesh that does not use Envoy.

Thanks. And well, I've already introduced myself. What I think is my favorite thing about Istio, I don't know how unique it is, but I just love our community. We've had 315 companies contribute just in the last year, and 11 of them are involved in leading the project as a whole. So at conferences like this, or any part of the world I go to, there's almost certainly going to be a co-worker nearby that I can go and grab lunch with, which is just a really cool feeling to have.

All right, so first question for the panelists. And by the way, I'm a panelist, and I'm going to be moderating because I couldn't find anybody else to. So our first question for the panelists is: why should you use a service mesh? We've heard a lot about that today, but also, why shouldn't you use a service mesh? Who wants to take it?

OK, yeah. I think what we're seeing with a service mesh is it's becoming more like a new platform, right? So when we say utilizing it, being able to not have to kind of carry forward legacy APIs when you talk about circuit breaking, rate limiting, et cetera, it's becoming its own platform. And I know with OSM, we're branching out in this ecosystem, doing a lot of integrations into OPA and other products. So I think that's a good reason to use it. The flip side of it, what are the reasons maybe not to use a service mesh? Yeah, I think there's some misunderstandings about what a service mesh actually resolves for you or does for you. I think most of my communication with customers is about network policy stuff. They're kind of thinking old-school VLANs. And I say, no, we're a little higher in the stack than where you're at. So just make sure you're doing your homework and you're understanding what the service mesh is going to provide for you. Yeah, that's what I got.

Yeah, so for me, I think there's exactly one reason when you should use a service mesh. And every other reason is a reason not to use it. And that one reason is if you have a very concrete and specific problem that you are solving.
If you are adopting a service mesh because you feel like you need to adopt a service mesh, or because you see other people doing it and you want to do it too, then you're doing it for the wrong reasons. And this is true of any technology, but I think it's especially true of the service mesh, which for a variety of reasons is mired in a lot of buzz. So whether that specific problem is something relating to encryption of data in transit, or whether it's reliability, or whether it's getting a uniform layer of observability, almost all of the service mesh projects will provide something along those lines, with a lot of differences in some of the implementation details, but the value prop is largely the same. But unless you're adopting that technology for a specific problem that you understand, then you're going to end up with a boondoggle. And probably every answer I give for the rest of this panel is going to be some variant of that, so I apologize in advance.

Yeah, I kind of agree with everything that's been said already. And I do think every organization probably has to do their own cost-benefit analysis of whether it makes sense for them to run the service mesh. And when it comes to benefits, I feel like there's already been some great talks that covered a lot of that already. And the benefit is really that you have the networking and the security layer, something that you would typically maybe write into your application, that is kind of automated for you by the service mesh. But it comes at a cost of complexity of running it and understanding it. And essentially, it's another layer that's kind of hidden from you. So every organization kind of has to understand this is a relatively new technology, and so you have to do the cost-benefit analysis of whether it makes sense to you. Do you have applications or platforms that it supports, et cetera, et cetera? But yeah, I think ultimately it's kind of a trade-off that you would be making at the end of the day.

Yeah. I think one of the costs that often gets overlooked when people are coming to adopt a service mesh is that it's the same for any software component. As engineers, we're all really excited to get our hands on new technology, but every piece of technology that you add has a cost. Even if it's free and open source, it has a cost in terms of maintenance. And so the reason to not adopt a service mesh that I would give is if you don't have an intention of upgrading and keeping up to date with whatever service mesh implementation you choose. You wouldn't run a five-year-old Apache server exposed to the internet, right? There have been dozens of CVEs over the years, like there are with any networking appliance. And so it would be completely dangerous, but we do see some users effectively doing that with their service mesh. It's managing their identity. It's managing their ingress. And yet they're running, in my case I'm looking at Istio, versions of Istio that are in some cases three years old, which, I just wouldn't recommend doing that. Arguably, it's free and open source technologies that have the highest cost, especially in terms of long-term maintenance. That's true.

All right, next question up. What's the most surprising use case for a service mesh that you have seen? Answer number three may shock you. So. I'll start out because I do have the story that we all talked about a lot. So at the last in-person KubeCon NA, the Department of Defense demoed a system where they're running Istio on an F-16.
So one of the things that, well, I'd say I pride myself, or I used to pride myself on, is that I run software that can't kill anyone. Like if it crashes, it's not a car accident or a pacemaker that doesn't work anymore. But apparently, that's not entirely the case anymore. That was definitely a surprising use case. Not sure I quite have my head wrapped around why the F-16 was running Istio, but apparently it was. Did that improve it? I guess either way.

Yeah, that was a weird one, even from the outside looking in. I'll say on the Linkerd side, I am often surprised by what people do with it. You know, because my background was very much in the world of we're building an API to serve, like, you know, calls from people's cell phones, you know, from their apps and stuff. And that was what I was used to. But you know, we've had Linkerd be used for, like, train switching and stuff, which, you know, hopefully that works really well. We've had it used in medical devices, you know, that people rely on. Actually, I gave a keynote last time about all the ways it had been used to help combat COVID-19 during the middle of the pandemic, you know, in all sorts of interesting ways. So it's continually surprising to me, I guess, and kind of gratifying too, because people are bringing it into new situations and solving new problems that I wasn't even aware of. One of the few joys in open source.

Yeah, I think mine's probably a little boring, but I think mine touches on some of the integration. So we were working with a customer, and I guess with a service mesh you get to this crossroad between the ops and the devs, right, and who actually controls policy. So this one particular customer actually wanted to use the OPA integration, because they said, hey, they don't trust the developers. So they actually had a policy in OPA to just ensure that the HTTP payloads and all of that were in line, right? So I just thought that was interesting, that someone was watching the other group and back and forth, you know. Malicious developers, what could go wrong, right? Right, yeah, you know. But those are good, yeah, it's a good use case. I mean, they just speak to the added granularity that you can do with these technologies. I think we can all agree developers are terrible. Yeah.

On the Consul side, I think the one kind of use case that was surprising to me, and I'm gonna mention it at the risk of adding another buzzword to the mix, is that one of the blockchain companies wanted to use a service mesh. And I did not think that this was possible, but I thought it was an interesting use case.

Yeah, all right, next question. What challenges do you see most commonly from people who are trying to adopt a service mesh? Go ahead. You sure? Yeah. Well, I think we can all agree that a service mesh is kind of like the canary of where you're at on your Kubernetes maturity journey, you know. But I think the biggest thing we see is people always want to scream about latency when a service mesh gets turned on, and then you kind of ask them, what was it before? And they don't typically have those metrics prior. So what I would say is, hey, if you're on this journey, just do your homework, start getting the math, start getting the metrics, and then just take kind of a crawl, walk, run type of approach, right? Just start to slowly enable things and get used to how it works and see what the impact is in your environment. I don't know what it was before, but it's definitely worse now. Yeah, yeah.
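A minimal sketch of that "get your metrics first" advice: pull a latency baseline out of Prometheus before you turn the mesh on, then re-run the same query afterwards and compare. This assumes you have a reachable Prometheus, the `requests` library, and an application that already exports a request-duration histogram; the URL, metric name, and `app` label below are hypothetical placeholders, not something any particular mesh provides for you.

```python
# Sketch: capture a p99 latency baseline from Prometheus before enabling the mesh,
# then run the same query again after sidecar injection to see the real delta.
import requests

PROM_URL = "http://prometheus.monitoring:9090"  # hypothetical Prometheus address

# p99 request latency over the last 5 minutes for a hypothetical "checkout" app.
QUERY = (
    'histogram_quantile(0.99, '
    'sum(rate(http_request_duration_seconds_bucket{app="checkout"}[5m])) by (le))'
)


def p99_latency_seconds() -> float:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError("no samples; is the app exporting request histograms?")
    # Each instant-vector sample is [timestamp, value-as-string].
    return float(result[0]["value"][1])


if __name__ == "__main__":
    # Record this number before injecting sidecars, then re-run after, so the
    # "the mesh made everything slower" conversation starts from real data.
    print(f"p99 latency: {p99_latency_seconds() * 1000:.1f} ms")
```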
Well, what was it? I think one of the challenges that we're seeing is related to kind of different deployment architectures and how different organizations, depending on their size, like how many Kubernetes clusters they have. And some organizations end up having quite a few clusters. And if you are in that situation, then running a service mesh control plane in each of the clusters could be quite a bit of operational overhead. And what we're finding from those users is that they kind of want to have some notion of multi-tenancy, where they would have kind of a shared control plane, whether that's deployed somewhere else or maybe you're running it as a cloud service or what have you. And then having those multiple Kubernetes clusters mostly just running the data plane. And kind of supporting these organizations in these deployment models has been kind of a challenge, an ongoing challenge, because it keeps changing.

Yeah, so I have kind of a long answer to this, which is, like, I've been doing service mesh stuff since like 2015, I think, if you trace it back. So I've been on panels like this for over five years, basically answering the same question: what's the biggest challenge to service mesh adoption? And what's astounding is that my answer continues to be the same year after year, which is that there's so much hype and there's so much noise and there's so much confusion that I think as someone who wants to adopt the service mesh and wants to do the right thing, like, it's really, really hard to know what you should do. And I don't know what it is about the service mesh, but it just attracts that type of thing. I mean, look at this conference, right? Can you be in this room for more than 30 seconds without smelling brimstone, without smelling the pungent aromas of vendor marketing? Where are the end user talks in this conference? It's like all vendor talks. There's three end user talks. Every other talk is from someone who wants to sell you a service mesh. And that is a sign of a sick technology. That's an unhealthy relationship. That's a junk food relationship. That's the candy bar where you see the commercial and you're like, oh, that looks so good. And you taste it and it tastes great for about 30 seconds. And then you're left with empty calories and a sugar hangover. And like two years later, you're still trying to get the service mesh to work and it never lives up to its promises. And not only have you not kept your promises to your team, you've let them down. You've left things in a worse place because you're boondoggled, you're saddled with this technology that you can't adopt. That's a real tragedy because it doesn't have to be that way. There is a path to salvation. There is a path. It is a difficult path. It's one that is not for the faint of heart or the weak of spirit, but it is a path. And that path is understanding very concretely the problem that you were trying to solve. And then using that as a scythe to cut through the waves of vendor marketing until you understand exactly what you're trying to accomplish and exactly what the options are for accomplishing it.

We might need to update your LinkedIn profile: service mesh prophet, something along those lines. We can work on it later. It spoke through me. The spirit of the service mesh spoke through me.

You know, I mentioned seeing a lot of users kind of install and forget in Istio and how concerning that is. And we started asking users about a year ago why that was.
And what we found was that upgrading is a really hard thing in a service mesh. You know, the best thing about a service mesh is now you have a proxy that's running everywhere. And you've got to upgrade it. You've got to patch it when there's CVEs, when there's a new version. You've got a control plane that you've got to take care of. And so the whole last year in our project, we've really had a renewed focus on sort of day two operations and what it costs to maintain an Istio installation. One of the biggest things that has changed: I mean, we've worked on making it easier, but we've also worked on making it so you don't have to do it as often, so that now users, instead of having to upgrade quarterly, they can upgrade every six months. And it's too soon to say, but we're hoping that that will encourage users, and take one of those big pain points of owning a service mesh and make it a little bit less painful for our users.

All right. What's the biggest ongoing debate in your service mesh project? Me, you want me to go again? Didn't learn the first time.

We all live in perfect harmony on Team Linkerd. So we basically, you know, we just ring the gong and we all just write the code and there's no arguments. No, I think for us, we have a continual challenge, which is we're trying to make Linkerd really, really simple and we're also trying to make it do a lot of stuff, and those two things are kind of at odds. And, you know, it's not a complete dichotomy. There's ways of introducing features, you know, which add a lot of complexity, and there's ways of introducing features which add a small amount of complexity, and there's different types of complexity, you know, the operational complexity versus configuration complexity versus maintenance complexity, and all those things kind of go into a big complicated decision matrix. So I'd say as a whole, our biggest challenge, you know, is how do we balance building the set of features that are gonna actually move the needle for someone, but not saddling them with something that's really complex. And, you know, that just manifests in so many different ways. Even something like WASM has been a really interesting discussion internally. So we don't have WASM support, we've held off on it so far, not because WASM sucks, I mean, WASM's awesome, but we had these particular experiences with Linkerd 1.x back in the olden days built on the JVM, right, like which you would never wanna do, never build a service mesh that way, but the one thing that the JVM gave us was this plugin architecture where you could load these, you know, runtime plugins, and, you know, it had this nice memory model and it was clear how everything worked. And, you know, what we saw was a lot of people shot themselves in the foot in like really severe ways, because you'd introduce something into every request path that was doing something, you know, and most of the time it would work, and then every 1,000 requests it would, like, you know, delay for 200 milliseconds or whatever. And so that experience has made us a little gun shy of, you know, complete data plane pluggability. On the other hand, there are a lot of good use cases for that, so how do we balance that? I don't know, I would say that as a whole that's probably Linkerd's biggest internal debate. Yeah. I would plus one it. Easy.
I mean, you know, there's so much gravity coming from a lot of these new projects like WASM, eBPF, and people wanna see it in action, and it's not easy. And there's bugs and there's, like, yeah, yeah, so you gotta manage all this stuff, yeah. So yeah, so I think that, you know, from a development perspective, yeah, it's really kind of weighing all the technologies and then trying to figure out, hey, like William said, is this a technology that we can attach ourselves to, and that's gonna be durable and really be something valuable for the end user to use?

Yeah, and the point about the end user is a great one, because those are the poor bastards who are gonna have to deal with this in the long run. It's not us, we ship CDs. Like we cut a new release of Linkerd on Friday and then we go home. Like, go ahead, run it, let us know how it is. You know, that's a very, very different relationship we have to our software from what the user has to our software, because the user actually has to operate it. Well, it's different now because we actually run Linkerd these days, but for a long time we didn't run it at all. So, you know, having that sense of empathy for the end user, that is critical to making any of these decisions, because if you don't have that, you're just, you're making a problem for them. You're making a big mess.

Yeah, I definitely agree with that, and maybe something that is unique about the Consul service mesh is that it also runs its own data store on Kubernetes, and that comes with its own kind of challenges that we're constantly thinking on how to improve, because that obviously comes with, again, like the operational overhead and how do you do upgrades and so on. And the ongoing debate, I wanna say, on our team is what is the best way of making it easier for the user and not making it so hard to, you know, worry about that data store that you're running in every cluster. Yeah, we care about the user. I think that was...

Yeah, you know, there's a thing about users too. Internal to Istio, we've got this, you know, Kubernetes API system using their storage, which has an amazing system for version management and maturity of your APIs. But what we learned is once enough of your users have adopted an API, it does not matter what you called it. We could call it "alpha, don't use, not ready for production" slash VirtualService, but if 10,000 users adopt it, it's not changing. We're not going to change it. It doesn't matter how much pressure. So there's constantly, you know, we'll look at things in the project and say, oh, in hindsight, we could do that a lot better, but we could also break a lot of users by changing it. So sort of balancing the need to improve and learn from our mistakes, as well as the need for users to have stability in the project, is a constant tension.

All right, well, let's talk about the war stories of the future. We've talked a little bit about what's gone on in our projects over the last few years. What are the war stories you imagine we'll be talking about at ServiceMeshCon 2026? Oh man, they're still doing it then? Well, you guys signed up. You all have to return. We're going to do a sequel panel. I thought we were going to EOL this thing.

I feel like we kind of talked a little bit about that, but I feel like a while ago, there was a conversation about, like, running multiple service meshes and how, like, this is something that users are going to want.
And nowadays, I don't think it's as active of a topic, but I'm wondering, or I'm hypothesizing, that this is maybe something that will come back once the service mesh becomes more mature, and people will end up in situations where they're running Istio, Linkerd, OSM, and Consul in one organization. And I'm curious what solutions are going to emerge to solve that problem.

Oh, the future. Oh no, I might get a little controversial here. You know, I think, you know, again, with end user empathy, right? You know, I see a growing concern from the community about some of this functionality just being normal primitives inside Kubernetes, right? So that's probably a really, really long debate, and with SIG groups, et cetera, but you know, I can see a crossroad where those conversations start to get very, very serious, you know? Nice.

And we had, in one of our kind of pre-panel discussions, we had a really good conversation about plumbing and how, you know, if you think about your house or your apartment, you know, basically the plumbing just works, and you know, if it doesn't, then, like, it's terrible and you call a plumber, and the plumber comes in and does something, and for the most part, you're not really an expert in plumbing. You know that it's there, you get the benefits of it, you know, like you imagine there's pipes and stuff somewhere in there, but you're not developing any expertise in plumbing. And so what I liked about that is, I feel like whatever the war stories are about the service mesh in the future, hopefully they're not the same stories that we have now, and hopefully, you know, the service mesh has moved to the level of plumbing. So, you know, there are service mesh experts out there, but there are many, many more service mesh users out there who are not experts, and today you kind of have to be an expert, you know, to operate it, but that really shouldn't be the case, right? Unless you're working at a company like Buoyant, being an expert in the service mesh is not really your job description, that's just, like, an unfortunate side effect of the fact that the service mesh is kind of complicated right now. So a lot of what we have been trying to do with Linkerd, and also on the Buoyant side, is: can we make this so it really is like plumbing? And so you get the benefits, but you're not paying the, you know, you're not paying the price. And there's still plumbers in the world, and, like, there's still poop, I guess, that, like, goes through the tubes, you know, that's the traffic and gets encrypted and whatever else, but you're not really thinking about that for the most part. And so, you know, I don't really know what those war stories will look like. I hope they'll be talking about higher level abstractions. I hope they'll be talking about, like, you know, oh, you know, policies, and how we got these, like, you know, server policies that conflicted with each other, and, you know, therefore the two teams got really angry at each other because their namespaces couldn't talk to each other, or something. But, like, hopefully it's not talking about, like, the specifics of, you know, our proxy implementation or whatever. Like, that stuff should all fade. If we've done our jobs right as service mesh creators, that should all fade down into the infrastructure.

William, I've heard you use the plumbing analogy a few times. I think actually I heard it last time we were at ServiceMeshCon here in person in 2019.
I have a lot of plumbing issues in my house. Well, so how do you think we've done so far? Like, as an industry, it's been a few years, how are we doing?

You know, I think we're getting there, but I still feel like, you know, even for my own beautiful, perfect service mesh, Linkerd, the best one, like, it's still really hard to operate. It's unnecessarily hard, you know, and it'll work fine for long periods of time and then you have to do an upgrade. Like, you know, then you gotta load a lot of stuff into your brain to do that upgrade. So, okay, well, can we make upgrades, like, totally seamless and transparent? Can we write an operator that does it for you? That's all stuff that we're starting to build out. I would like to get to that point, and I think we can get to that point, but we're still a ways away from that. And so, at least in my little survey, you know, I'm not happy with the state of Linkerd today, because I think it is still too hard to run, and the people who are really successful, who have, like, these amazing talks, we've got a great talk tomorrow from Entain Australia about how they 10x'd their throughput using Linkerd, it's awesome, but they are service mesh experts and they had to learn a lot about Linkerd to do that. And I'd like to have those same stories happen without anyone really understanding the internals of Linkerd, you know, other than, like, I guess, the Linkerd experts who, I don't know, who are the plumbers in this analogy.

I'm curious, Irina and Brian, what do you guys think of the plumbing analogy? Is that where you guys see us heading? Kind of where your projects are going, and do you see any initiatives that you guys are currently working on that would get us glorious plumbing? What more could we ask for?

I just bought a house so I could really relate. Right! To the plumbing analogy. We're on the same wavelength. But I do really like this analogy, because it really makes me think of, like, what would it be? What would it look like for me to just call a plumber when it comes to, like, a service mesh? Because right now it feels like you do need to build that expertise, whether this is something that you do internally within your organization. I think that probably is something that happens most of the time. But I'm wondering, like, how will this evolve, and whether, like, people will decide to kind of offload that knowledge to maybe something like a vendor or a cloud service, or will they decide to still, like, build that expertise internally? And then on the other end, like, what can we as, like, service mesh producers do to make it easier, and is it really automatable, or is this a problem that can never be truly automated? One thing that, kind of going back to your question, Mitch, that we're doing on the Consul side is trying to make it easier for people to, like, diagnose problems. Because right now it's sort of like, imagine if you had a plumbing emergency but there were no plumbers to call, and you would have to figure it all out on your own. That's how it feels right now. And what we're trying to do is build tools to help people kind of diagnose and fix those faster.

Yeah, I think that's a really good point, because actually plumbing kind of sucks, right? Like, you have no idea what's happening in your house. Water's flowing around, you have no visibility into it. There's all sorts of things that, you know, we can improve plumbing too. So maybe that's the next startup. Yeah, I spent $6,000 on a plumbing leak over the summer that just drained into my crawl space.
I would have loved Grafana for my plumbing. Like, that would have been really, really useful. So I guess as I think about kind of the plumbing model for Istio and our projects, one of the things that we've all talked a little bit about is Kubernetes APIs. You know, when you move into an apartment you expect that it has plumbing, but you don't really ask who the manufacturer was. Like, whether they use PEX connectors or copper pipes, or, I mean, maybe you ask about lead if it's really old, but for the most part you don't care. It has the same interface, you use it the same way. I think the way that the industry is going, service meshes can be the same way. You don't necessarily need to think too carefully about whether it's Istio or Linkerd or Consul or OSM or Kong or, I'm gonna miss some, I can't name all of them, under the hood. You're simply able to use, like, the Kubernetes Gateway APIs and whatever the folks at Kubernetes Networking come up with next to use the service mesh. You use it and you move on, and stop thinking about the underlying implementation.

How do you think this will be possible in the future? Well, I think we get a hint of it today in the Gateway APIs, right? They're at v1alpha2, I think, in Kubernetes 1.22, and so they're starting to take shape. I think the shape they're in now is probably mostly what they're gonna look like as they move forward, but there's a lot more to be done, right? There's a lot more to a service mesh than just gateways and ingress, and so I haven't talked to the folks in Kubernetes Networking recently to know what is the next API they're gonna tackle and bring new ones, but I like what they've done so far and I'm excited to see what happens next.

All right, well, I think that's all the questions I have. Did any of you have anything to add in before we go to questions from the audience? All right. I'm good. I think we had someone promising to read questions off the internet if there were any, and yeah, go ahead here on my right, and if you could speak up, I'm hard of hearing. I will.

Okay, so the question is, I'm gonna rephrase a little bit, but what sort of redundancies exist in the service mesh world to make sure that your service mesh doesn't go down? How do you run more than one for high availability in your projects?

I think that there's kind of two parts to it. The one is, I'm not sure if you're asking about the control plane or the data plane. In the data plane. Yeah, so I feel like in the data plane, because you're running your proxy as a sidecar with your application, you will have as many proxies as your application instances, and I think that basically provides the high availability that you're looking for.

I think you're absolutely right. It's really easy to do that at runtime in a stable state within your service mesh, because you're hopefully running your application already in a highly available configuration. You've hopefully got horizontal pod autoscalers all configured for it to respond to demand, and so your service mesh data plane will respond to demand as well. Where it gets a little bit more complicated, in my opinion, is during an upgrade, because all of a sudden you have something that's infrastructure and plumbing for your entire thing, and, you know, you don't go to an apartment and rip the plumbing out and put it in while the water is running one day, but we do that with service mesh, and so it's a little bit more complicated.
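To make the "portable Kubernetes APIs" point from a moment ago a bit more concrete, here is a rough sketch of what driving routing through the Gateway API might look like, using the v1alpha2 resources mentioned above and the official Kubernetes Python client. It assumes the Gateway API CRDs are installed, that a Gateway named my-gateway already exists, and that whichever mesh or ingress implementation you run actually reconciles HTTPRoute objects; the names and namespace are hypothetical placeholders.

```python
# Sketch: create a Gateway API HTTPRoute (v1alpha2) that any conforming
# implementation could pick up, without caring which mesh is underneath.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1alpha2",
    "kind": "HTTPRoute",
    "metadata": {"name": "checkout-route", "namespace": "default"},
    "spec": {
        # "my-gateway" is assumed to exist and be managed by your mesh/ingress.
        "parentRefs": [{"name": "my-gateway"}],
        "rules": [
            {
                "matches": [{"path": {"type": "PathPrefix", "value": "/checkout"}}],
                "backendRefs": [{"name": "checkout", "port": 8080}],
            }
        ],
    },
}

# HTTPRoute is a CRD, so it goes through the custom objects API.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1alpha2",
    namespace="default",
    plural="httproutes",
    body=http_route,
)
```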
We've tried to use revisions to allow you to sort of flip the switch as you see fit on each service, and then flip it back if you want, but it's still not what I would describe as a completely highly available system for upgrading the data plane. The sidecar is a double-edged sword. I mean, it gives you a lot of extensibility, but it's programming iptables, and if that thing goes south you're in a world of hurt trying to figure out a bunch of stuff.

Yeah, I don't think Linkerd has anything different here than any of the other service meshes. It's, you know, we rely on the same Kubernetes primitives, and, like, it's, you know, replica sets of immutable pods, and so that kind of informs the solution space there. You know, if Kubernetes changes, or if we move off of Kubernetes, then, like, you've got different, you know, you've got different options. You know, I guess you could write your whole data plane as one big WASM plugin and then just dynamically load it. Yeah, you know, that could be the next big wave. But where would the WASM run? It'd be a data plane in a data plane. We could run Linkerd inside of Istio. I'm sure there's a blockchain or something in there. Yeah, there's a blockchain. Very nice.

I think there was another question. So I'm actually gonna ask if one of you can repeat it. My hearing is really bad and I don't think I could repeat that. So I think the question was about upgrades again, right? How to basically have a zero-downtime experience. Am I summing that up correctly? Okay, the testing around it, yeah. Sure, sure, yeah.

Yeah, I would probably say everyone's in line here. I mean, the development process, I mean, there's various, you know, soak tests that are covering, you know, corner and edge cases. You know, we're probably piling way more workload on than what a usual customer would do. And again, if you're soak testing, you're running it for a long time, right? So yeah, we got a pretty stringent process as far as, you know, when we enable new features and take them through the regression testing.

Yeah, I like this question a lot, because open source is horrible. That's, you know, we do so much testing on Linkerd, and, like, as many different iterations as we can, and, like, we ship out, you know, release candidates, like, hey everyone, please try these release candidates, and, like, people do and they find issues and we're feeling pretty confident, and we'll, you know, then we'll ship a release, and then, like, someone will upgrade three weeks later and they're like, well, it doesn't actually work if you're on this special variant of this and this and this, and you're like, ah, if you had just tested that, like, you know, earlier, or if we had known about that, like, you know, and now we have to cut a new release, and that's annoying because there's all this process and you have to redo all the testing. So it's just, it's a difficult, I mean, I don't think there's an easy solution, but it's difficult, and I think it's particularly difficult for the service mesh because it sits at the intersection of all these complicated things, right? The network is going through it, and all of the layer three, layer four, like, underlying substrate there, like, if something's weird there, then, like, that's gonna affect your service mesh. It relies on all this Kubernetes stuff, and, like, if something's weird there, well, then that's gonna affect your service mesh. It relies on, you know, it's just this weird super complicated intersection.
So I don't know that we, you know, at least from my perspective, I don't know that we really have a great answer. I did mention earlier, and this has made actually a nice difference for us, we started running Linkerd ourselves. So we have a SaaS product called Buoyant Cloud that's powered by Linkerd, and that has really changed our relationship with this product, because now, you know, we have a team that is, like, you know, has to wake up at three in the morning when Linkerd ain't working, and, like, that team is highly incentivized to make sure that, like, okay, I'm gonna watch this upgrade, I'm gonna really make sure, I'm gonna look, you know, right, right. So we're not just shipping CDs anymore, we actually are, you know, living the whole service owner lifestyle, and that's helped. But even with that, you know, even with that, we run this in one particular environment, you know, and there's a hundred different edge cases that we don't get. And so it's just a, it's a difficult process. Every release accumulates tests, but still stuff goes through.

I mean, I think the unfortunate truth is, like, we can't accommodate for every permutation that's out there, right? And then, I mean, I think what William was saying here is, that's why we like to push RCs out. I mean, we tell you, hey, this is kind of experimental, et cetera, test it out. And we actually do get a lot of feedback from the community, you know, because again, it's hard to know every type of configuration that's out there, you know. So get your RCs out there early, get people beating them up and providing feedback to the GitHub repos, you know?

I think at the same time, though, like, we can't proactively test every conceivable thing that you can do with a network, right? Networks have been around for a little while. There's a little bit of surface involved there. And so testing every combination is just not realistic. What we can commit to, though, is saying we will never break you the same way twice. If you've experienced an outage on an upgrade in any of these projects, please talk to us, open up a GitHub issue. It should be added to the soak test suite, or the different projects have different test suites, but it should be added to the automated test system in such a way that we can promise you we won't do it again. Another way of saying this: every time we break your production systems, it's gonna be in a new and exciting way. You're welcome.

Was that time, or was that five minutes? Time? Yeah, I think we have three minutes. Two minutes, okay. Any other questions? Some questions over there. And again, if I could have one of you repeat it, I'm sorry.

Changing gears to the multi-cluster mesh concept. I'm not familiar with where all the projects are at, or not, with multi-cluster. But are there any specific architectures or multi-cluster topologies that seem to be winning out from the user base and feedback that you're seeing?

So the question was, are there any specific multi-cluster topologies that we see winning out in the end user space? I can think of one that I'm seeing quite frequently. There's a lot of variation as to why people come to multi-cluster, but one common thread I'm seeing through very many use cases is they wanna have only one cluster to store their configuration in. We've tried out experimentally some models where you have to blast your config to every cluster that's involved in your mesh, or have a replicator system. And by and large, what we found is nobody wants to do that.
As a matter of fact, a lot of times your config cluster, where you're gonna be storing things like virtual services, destination rules, gateway configs, doesn't actually run any workloads. It's just an empty Kubernetes cluster for housing just your config, so that developers can interact with that config without having to worry about having prod access to your systems. That's just one pattern I've seen.

Yeah, I think I mentioned that also in one of the panel questions, that the multi-cluster architecture that we are seeing is when you have the control plane be shared among the clusters. So you would still want the services in each cluster to connect to each other, but then maybe you want the control plane to be administered by maybe a separate team. Or maybe you just don't want the developers to have access to those configurations. And then you end up with this more hierarchical structure where you have the control plane running somewhere else, and then your Kubernetes clusters are just mostly running the data plane.

Yeah, I think the two main patterns that I've seen are kind of a localized spillover, affinity spillover. And then the other is kind of similar to what was said here: I think most of the enterprise customers are trying to follow the kind of HA patterns of the clouds that they're on. And so they want to be able to say, hey, if this region goes down, can we still operate? And so there is this kind of synchronization of data and config to allow someone on the West Coast to still operate something if the main cluster was on the East Coast. So yeah, I think, I mean, it's been my experience that a lot of the multi-cluster patterns are following the cloud providers' kind of high availability processes.

Should we take one more? Oh, we're over time. I'm sorry. All right, well, thank you guys for joining me. I hope you had as much fun as I did, and we'll be around. So come up to us with questions.