All right, we've got all three mics, so I think we'll go ahead and get started. Welcome, everybody. My name is Mitch Connors. I'm a software engineer at Google and a TOC member of the Istio project, and I'm going to be moderating this session, so I'd like my panelists to introduce themselves if they would.

Is this working? Okay. I'm Nitya Dhanushkodi, and I'm a software engineer on the Consul service mesh team.

I'm Keith Mattix. I'm a senior engineering lead on the Open Service Mesh project at Microsoft.

I'm John Howard. I'm a software engineer at Google, working on Istio.

Hi, I'm William Morgan, one of the creators of Linkerd and CEO of Buoyant.

Thanks. So our talk today is titled "Service Mesh Maturity: Are We There Yet?" If each of you could weigh in on one way that you see service mesh as an industry maturing, and also on one way that maybe you see that we still need to grow. Whoever wants to jump in.

All right, I'll go, I guess. Yeah, I mean, maturing, yes. Are we fully there yet? No. One of the ways I've seen that we've matured is that it's not so hard to just go install a service mesh now and at least get started on something, right? If you look at years past, even just that hurdle was quite a bit for some people. There's still a lot of effort needed to reduce the cost of operations, though, especially beyond the day-zero install. But I think one of the other areas of maturity is the ecosystem. Even once you get your service mesh up and running, largely that provides a lot of building blocks, right? You have identity, you have the ability to configure rich L7 routing rules, do canary rollouts, but you still have to make use of that to actually get value. So there are some tools, some users are building their own things, but I think the ecosystem around simply creating a deployment and having it automatically canaried, things of that nature, has a lot of room for growth.

Yeah, I'll say from the Linkerd perspective, we've been saying the words "service mesh" since like 2016 or something. One thing that's really changed for us over the past year or two has been the nature of those conversations. A lot of the early conversations were very open-source-heavy, with enthusiastic audience members excited about the technology and wanting to really dive in. At least what I've seen has changed a bit: there's still some of that, but a lot of the hype has died off a little, and the conversations we're having now are with companies that are trying to adopt a service mesh. They kind of know they need one, and they know they need help in a lot of ways in adopting one. So it's shifted from an audience of people who are enthusiastic and want to do it themselves to people who maybe are not enthusiastic, but they see it as inevitable and they know they can't really do it themselves. So that's been maybe a maturity more of the audience than of the technology.

On the "are we there yet?": no, I don't think so. I think there's a lot of maturity still ahead of us, especially when it comes to, kind of to John's point, it's really hard to get a functioning service mesh. It's way harder than it should be. And I think if this whole system is going to work, it needs to be not significantly harder than adopting Kubernetes itself. It needs to be kind of the next natural consequence, and not a whole new team and a whole new project.
And so I've seen movement in that direction, but I wouldn't say we're there yet.

Yeah, so I think in general a lot of the core traffic management APIs are starting to show some maturity, because we see them implemented across most of the service meshes, and now we're starting to standardize on APIs there. And then I was thinking about Consul specifically, and where it's mature and where it needs to grow. Consul has existed as a service discovery tool since even before the concept of a service mesh, so it was built with heterogeneous environments in mind, and I think that's an area where Consul has been battle-tested over time. As we build these things and improve on them, we'll keep seeing maturity there. And then a place where we're still growing, and I'll speak for Consul, but maybe other service meshes also feel this way: I think making it easier to use is still one of our biggest focuses.

Yeah, I agree with a lot of what's been said. I think for me, one of the ways I've seen service meshes mature is that we've got it in the hands of more people. We've got more years under the belt of this term "service mesh." And because of that, I think what we're seeing is a lot of validation of some of the problems that service mesh started off trying to solve. With mesh being around for five, six years, we anticipated certain problems would exist, and now it's not just that we've seen users hit those problems, they've actually come to us and said, hey, what we really need is X, Y, Z. One of the areas that comes to mind from my work on Open Service Mesh is around advanced use cases of PKI: handling CA requirements, compliance, service meshes entering into those conversations. And now we're seeing customers come to us saying, hey, we know mesh is how we solve these compliance requirements; now, to William's point, help us do that. And that's where I think we need to mature. I think we need to give customers an easy button in order for it to just work.

Nice. So speaking of an easy button, one of the things that we've heard a lot about at this conference, and that has had a lot of press releases recently, is the Gateway API and now the GAMMA initiative. I think you four represent four service meshes, and all of them are doing something with GAMMA and the Gateway API, so it's cool to see that; I think we call it co-opetition, very neat. But what does it mean for a user that they can write Gateway API resources against any of our service meshes? Why should that be important to them?

I've got the mic, so I'll start. I am really excited about the work we've been able to do in Gateway API and GAMMA, because here's what that means. Right now, if you're a user coming into Kubernetes, you learn, okay, I see the Ingress resource exists. In fact, you were telling me earlier about your journey trying to use something besides Istio. You're like, okay, Ingress is there, it's mature. So they go towards Ingress, and then, okay, now I've got my mesh, and I've got to learn: instead of Ingress, I've got to use this resource, and then I've got to set this config, and then this, that, or the other.
And this is all just for traffic routing; we're not even talking about multi-cluster, we're talking about getting traffic into your Kubernetes cluster, which should be one of the more basic things you're able to do when you sign on. So with the Gateway API, the people behind it are reimagining the ways that we ingress traffic into your cluster. And then what we're doing with GAMMA is we're looking to take those same APIs, those same primitives, those same concepts, the same models, and use them for service mesh. That way there's not that gap, that jump from this resource to that resource to this resource. It's one set of resources, one set of configuration for traffic routing, and we're exploring policy, authorization policy, as well. And that can be consistent across gateway, or across ingress, and mesh use cases. So from a user experience perspective and an education perspective, that's very exciting to me.

Yeah, one thing I wanted to highlight: I see people looking at a neutral standard and thinking, oh, that's great, I can move between service meshes or run two service meshes. But I don't actually care about that, because like almost everyone else, I only run one service mesh, right? So I'm fine with just using their APIs. But I think there are still a lot of benefits of the Gateway API for those types of users. There's a huge ecosystem around these APIs. That's everything from documentation, tutorials, YouTube videos; that's integrations like cert-manager or ExternalDNS or Argo CD, all these types of things. Some of those integrations may happen to integrate with your mesh, or they may integrate with fifteen meshes and have a huge maintenance overhead so they can't actually work on features. Now there's just one API that's common for ingress and for mesh to integrate with across the board. So I think we'll see a lot more integrations, which enables what I mentioned earlier: actually taking the service mesh and getting those building blocks used for something useful. We'll see a lot of product offerings, and we're already starting to see that, on top of the Gateway API.
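[For illustration: the shared model Keith and John are describing centers on the Gateway API's HTTPRoute. A minimal sketch with hypothetical names follows; the same route shape attaches to a Gateway for ingress, or, under the GAMMA pattern, to a Service for mesh traffic. GAMMA was still being worked out at the time, so treat the mesh form especially as a sketch.]

```yaml
# North-south: the route attaches to a Gateway for ingress traffic.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: checkout
  namespace: shop
spec:
  parentRefs:
    - name: public-gateway        # hypothetical Gateway
      kind: Gateway
      group: gateway.networking.k8s.io
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout
          port: 8080
---
# East-west (GAMMA): the same kind of route, but attached to a Service,
# so the mesh applies it to traffic addressed to that Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: checkout-mesh
  namespace: shop
spec:
  parentRefs:
    - name: checkout
      kind: Service
      group: ""                   # core API group
  rules:
    - backendRefs:
        - name: checkout
          port: 8080
```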
I hate standards. Well, no, I wouldn't say I hate them, but I personally am not a believer in standards for their own sake. For Linkerd especially, we've made a very focused decision, which is: we are going to do what's right for our users, and, not regardless of everything else, but everything else has got to be secondary to that. And yet, despite that, we did adopt the Gateway API in our most recent release, for one very important reason. Well, A, it was really well designed, so kudos to everyone involved. And B, it solved a really important problem for us, which is that we needed mechanisms for Linkerd users to configure some of Linkerd's behavior, especially some of the newer stuff around route-based policies, where you need ways of describing classes of HTTP traffic, and that's not an easy thing to do. The Gateway API had done that in a really elegant way, a really powerful way, and in a way that, even though it wasn't designed for the service mesh from the outset, really worked for our use case. So we adopted it in kind of a slightly funky way, which I hope to correct in the future, but it really solved an immediate problem for Linkerd. And that's why, as soon as Keith and all the GAMMA stuff kicked off, I was like, you know, even though I hate standards, or don't care about them, or maybe at best I'm wary of them, because I've seen them weaponized, and I wrote a long blog post complaining about this, we're getting involved in this because I think it's really good for Linkerd. So I guess I'm saying kudos, keep up the good work, and if you mess up, then, you know, we're gone, we're out.

Cool, yeah, I think I'm going to end up plus-one-ing a lot of what people have said. It's exciting that hopefully users of service mesh will be able to focus less on the UX, because that will be consistent and documented across multiple meshes, and care more about their more advanced use cases. And I'm especially excited to see how the APIs will evolve for east-west traffic and also for multi-cluster and multi-platform.

Yeah, I think that's a really good point too. There is a nice world in the future where we can think about configuring our ingress traffic with the same core building blocks as our mesh traffic, and configuring the policies in the same way. That wasn't something I had thought was really going to be a possibility, but it seems like it is, and I think that would be really great for users. Because at least from my experience, one of the hardest parts of Kubernetes is just the fact that you have all these configuration objects that all have their own semantics, and you have to figure out how to make that work. This is why people complain about, oh, I'm spending all my life in YAML now. It's not really YAML's fault; it's just the mechanism by which you get Kubernetes' power at your disposal, right? So getting that right, and having an API that can span the full breadth of use cases you need while still making sense, with these logical building blocks, is really, really important. And again, I think the Gateway API is awesome as an example of that.

Nice. So one of the trends we've heard at the conference this year, and we heard about it at ServiceMeshCon Europe as well, is the sidecarless model for service meshes. I think just before this we heard from Isovalent about Cilium and the way they're doing sidecarless, and we've also had a few sessions on Istio's new ambient mode, which is sidecarless. So what's the motivation behind these models, and is it really still a service mesh without sidecars?

Sure, I can take this. I'll let William answer "is it a service mesh?"; I think he's the expert there. But yeah, on sidecarless, it's kind of a weird name, but we've run into a lot of operational issues with sidecars. They're hard to upgrade, hard to scale, hard to onboard and offboard dynamically; you need to restart your pods to upgrade. That's one huge trend we see with Istio users: we ship fixes and then users don't actually upgrade, because it's too painful, right?
That's not something you see as often with, say, your ingress gateway, because you can do a rolling restart and gradually deploy it out; it doesn't impact your applications at all. Some other costs: every pod now has this sidecar process running, and even if it happened to be using almost no resources, you certainly have to request some resources for it, which has some overhead. So we often see a lot of, if not actual CPU usage, then CPU and memory requests that are just burning through users' budgets. I mean, the list goes on, right? There are all sorts of issues we've uncovered with sidecars. There are also plenty of benefits, but we saw that a lot of users don't want to take on the cost of deploying a sidecar everywhere, and they still want to engage with service mesh features, whether that's mTLS, telemetry, or richer features like L7 traffic routing and load balancing. So that's where the origins of moving away from sidecars, or offering alternatives to sidecars, which I guess is the more accurate description, come from.

No. No, no, I mean, I don't think that seems right. So this is another thing I've written a long blog post about, outlining all my complaints and opinions. Sidecars have a lot of problems, and there are a lot of annoyances you have to navigate, of which you mentioned a couple: the fact that pods are immutable in Kubernetes, so if you want to upgrade your sidecar, well, you've got to roll the whole pod, and so on. But I believe the sidecar model is still the best model from the perspective I am most interested in, which is operational simplicity, because it's the one that is really understandable. It's the one where it's very clear what component maps to what, and especially as you start getting into the world of zero-trust security and all that stuff, where you're trying to enforce everything at the most granular level, you have a very clear model. It's like: here is the sidecar; it enforces policy and TLS for this pod; it contains the secret for that pod; and everything is very straightforward from there. So I don't want to discount the utility of the non-sidecar approach, at least from the ambient perspective, which I believe preserves the security properties I like about the sidecar model. But given the simplicity, and given that, and maybe this is me being optimistic, I think a lot of these warts will be fixed by Kubernetes at some point, we'll figure out how to do startup ordering and all that other stuff, I've looked at the options and I continue to be a fan of the sidecar model. We continue to push that forward, with the underlying difference maybe being that we've been spending all this time trying to make our sidecar as small and as fast as possible, to reduce the impact of the extra resource usage and so on.

I think it's an interesting conversation to have, because there is a set of engineering and security trade-offs you're making at every step here, and as much as I'd like to say that you're wrong and I'm right, because that's what I'd love to say, I don't think that's really the case. I think it's a lot more nuanced than that.

The sidecar API has been coming in the next Kubernetes release, right? For the last three, four years. I think it's going to happen at some point. Maybe not that specific KEP.
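[For illustration: the per-pod reservation John describes looks roughly like this after sidecar injection. A minimal sketch; the container names, images, and numbers are hypothetical, though the 100m/128Mi requests mirror common proxy defaults.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-7d4b9             # hypothetical workload pod
spec:
  containers:
    - name: app                    # the actual workload
      image: example/checkout:v1
    - name: mesh-proxy             # injected sidecar; because pods are immutable,
      image: example/proxy:1.12.0  # upgrading it means rolling the whole pod
      resources:
        requests:
          cpu: 100m                # reserved on the node whether used or not,
          memory: 128Mi            # multiplied across every pod in the mesh
```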
So, William, I saw on Twitter that you had a novel approach to a sidecarless model for Linkerd. Did you want to walk us through that?

Oh, you really want to hear about this? Okay. So, you know, I think the best way to deliver this, if you really don't want to see the sidecar, is I can just hide the sidecar from you, okay? I can give you a kubectl. If you want sidecarless Linkerd, I can deliver a kubectl that just removes Linkerd from the output. And in fact, we can use eBPF to do this, and then you get the best of both worlds. We can have kubectl do the string manipulation in kernel land, which is incredibly fast and efficient and bypasses all the network ops, and we can harness the true power of eBPF. And then you get the best of both worlds: you have the sidecar model under the hood, but you never have to see the fact that you have a sidecar.

You missed Wasm on my buzzword bingo.

Oh, yeah. Okay, I'll take another crack. Did you consider adding SBOMs in there?

Oh, yeah. See, I think we're on to something here. If there are any VCs in the audience, just know: seed startup, pre-seed, coming soon. But yeah, thinking about sidecars and sidecarless, I actually have this vision I'd like to see for service mesh, where things are even more transparent. You know, service mesh took off as being a kind of transparent proxy. You inject your sidecar, you use the same URL, and then the magic just happens. You get traffic splitting, you get metrics, and it just works. But at Open Service Mesh, when we were talking to a lot of our customers, and we often hear the term cost of ownership, what we're actually seeing is a lot of our users, our customers, saying: even still, we just want mTLS, we just want encryption. And we've striven to make OSM as simple as possible, but I can't help but feel that there's still an easy button out there. I've had a lot of conversations here about the sidecarless ambient model, and it's something that we're taking a look at, we're evaluating. But I would love to be in a reality where, if you want encrypted traffic between your services, you can just spin up a Kubernetes cluster in your cloud provider of choice and it's just there. The identity can come from something hosted, something in your cluster takes care of all that, and you don't have to see it. You don't have to worry about your application container restarting. And like I said, sidecarless is attractive to me because of how it aligns with that vision. But there are so many different ways to skin the proverbial cat, and I'm just looking forward to doing more research and seeing if we can't make that vision a reality someday.

Yeah, I think on Consul we also see the potential of these sidecarless architectures, and the issues people have been running into with sidecars. So I think we're just looking into it and seeing where it might make sense to incorporate.
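[For illustration: the "we just want mTLS" request Keith describes is close to a one-liner in some meshes today. As one concrete example, and not OSM's API, Istio's mesh-wide strict mTLS policy is a single small resource; Linkerd, for its part, enables mTLS by default with no configuration at all.]

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so this applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic between workloads
```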
Okay, I just want to respond to Keith's point, because we also see that, right? We see a lot of people who are coming to Linkerd, and I'm like, why are you doing this? Oh, okay, because we want mTLS, we need encryption of traffic in transit. And they're almost apologetic. They're like, I know Linkerd has all this other cool stuff, but this is all we need. And I'm like, well, if you just care about encryption in transit, you can do IPsec or the like; you don't need a service mesh for any of this. But if you dig in, what we find is that often they're using "encryption" as kind of a very loose word. It's like, well, we don't just want encryption, we want policy, right? And we want policy at the layer-seven level. So once you start unwinding what they actually want, it gets much closer to "oh, this is something we're going to have to solve at layer seven one way or another" than to "we can slap IPsec on it and, hey, everything's encrypted, the regulators are happy," or whatever. So often I find we have to educate some of the would-be adopters: if all you really care about is encryption, and that's it, mTLS is total overkill. It's really only once you start caring about workload identity, policy enforcement, zero trust, and blah, blah, blah, that it starts making a lot more sense.

So you all talked a lot about the operational burden. How should users be thinking about the total cost of ownership of a service mesh? Not just onboarding, but all through the life cycle of their adoption. How do they compare and contrast? How do they prepare for total cost of ownership? And what do you all do to help mitigate it for your users?

Yeah, so when I think about total cost of ownership, that includes things like operating your service mesh at scale, upgrades, and how it fits with how your organization is structured. One example of that: when you install a service mesh, you have a bunch of secrets and certificates associated with it. The default way of managing those secrets now is to use Kubernetes Secrets, and if you're in a multi-cluster setup, or you're federating, then you're copying Kubernetes Secrets from one cluster to the other. One way that Consul helps with that problem of secrets management is by having a tight integration with Vault, which is an open source tool for managing your credentials securely. Consul can offload to Vault a lot of the operational work of secrets management: storing those secrets securely, authorizing access to them, and also managing operations like rotation of your control plane certs. So that's one way Consul tries to reduce that part of the cost of ownership. Then another thing we're seeing is the rise of managed service meshes, to help manage the cost of operating the control plane. And what else are we doing on Consul? We've also been working on reducing the complexity of a Consul deployment, so that there are fewer components to upgrade, and fewer times you need to do an upgrade, by making more of the versions compatible with each other.
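[For illustration: Nitya's Vault point, sketched as Helm values for Consul on Kubernetes. The exact keys vary by chart version, and the roles and PKI paths here are placeholders, so treat this as a shape rather than a reference.]

```yaml
# values.yaml for the consul-k8s Helm chart (keys approximate)
global:
  secretsBackend:
    vault:
      enabled: true                     # keep Consul secrets in Vault, not K8s Secrets
      consulServerRole: consul-server   # Vault Kubernetes-auth roles (placeholders)
      consulClientRole: consul-client
      connectCA:                        # use Vault as the service mesh CA, so cert
        address: https://vault.example.com:8200
        rootPKIPath: connect_root/      # issuance and rotation happen in Vault
        intermediatePKIPath: connect_inter/
```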
Yeah, total cost of ownership... I like the term, but I don't feel like it completely encompasses the nuance in the conversation. If you want to use a service mesh and you want canary deployments, there are lots of ways to do that wrong. Not necessarily wrong, but lots of ways to do it that aren't up to, quote-unquote, industry standard. I did a talk a couple of months ago walking through the evolution of networking in Kubernetes, and one of the points I made that I think is applicable here is that organizations have to make decisions based on their problems, the things they're trying to accomplish, whatever it is they're trying to solve. Nowadays, if you want a production-ready service in the cloud, it looks very much like the way the giant cloud providers, the giant companies, deploy their software, and that means you've got the complexity of those giant companies. But you might not have the resources for that. So for a service mesh, this is all a long way of saying that service meshes are very powerful pieces of technology. They can do a lot of really cool things, and of course people are going to point at one and say, yes, I want that, because it claims to do it simply. But at the end of the day, you are basically running another layer of networking through your cluster. And at that point, unless your mesh is managed, you are as an organization accepting some responsibility for understanding what's happening and for responding to incidents. Again, unless you get managed support from somebody, you don't get to just call someone and say, hey, something's not working. It requires relatively deep understanding to be able to debug it and operationalize it. So I say all that to say: the total cost of ownership of a service mesh, if you're running it yourself, is probably more than you think. If you are fortunate enough to be at a company where you've got a dedicated platform team, then excellent; that's a great growth opportunity to learn what's going on, to be involved in open source, which I'm always an advocate of, having come from an end-user company. But I would just say: beware. As much as all of us and our respective projects try to simplify things, there is an inherent cost to running something and getting these features. And software sucks. In a lot of ways, I think software sucks. You love it, you hate it.

Yeah, I mean, obviously, from our perspective as developers of service meshes, we try to chip away at the friction points that users hit, from small issues to big ones: the Kubernetes sidecar KEP, Gateway API, ambient, those types of things. But there's also a burden on the user. There's never going to be a service mesh, I would imagine, where you just press a button and never have to think at all about how it runs or what it does, right? You have to have some level of knowledge of what's going on in the system, just like you do with Kubernetes, or even Linux, or whatever else you're running. So my recommendation would be to digest this slowly. There's no need to go use all 14 Istio custom resources on your first day, right? Most users are adopting a service mesh for some particular reason. Maybe they want ingress: start with ingress, add sidecars later as you need them. Or maybe they want mTLS: just do mTLS first, and worry about canary deployments later, once you've got experience and time to learn and onboard your team, that sort of thing.
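[For illustration: when a team does get to that later canary step, the same Gateway API route shape from earlier extends to weighted splits. A minimal sketch with hypothetical service names.]

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: checkout-split
  namespace: shop
spec:
  parentRefs:
    - name: checkout              # GAMMA-style: attach to the Service
      kind: Service
      group: ""
  rules:
    - backendRefs:
        - name: checkout          # stable version
          port: 8080
          weight: 90              # 90% of traffic
        - name: checkout-canary   # new version under test
          port: 8080
          weight: 10              # shift gradually as confidence grows
```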
Yeah, I'm all on the managed service mesh train. I think the number one cheapest way to actually operate a service mesh in production is to use a managed solution. Not coincidentally, I also sell a managed Linkerd solution, which I highly recommend you adopt immediately; come see me after this talk and I'll tell you about it. It's extremely expensive, but it's awesome. It's funny, because we pick on Istio all day, and we talk about, hey, Linkerd's faster and Linkerd's lighter and the latency's lower, and I like doing that because I'm a mean person and also because it's true. But at the end of the day, what we find is that that stuff doesn't really matter to the people who are adopting Linkerd. Really, an extra millisecond here or there? Their database takes 500 milliseconds. Their application takes 700 milliseconds. They've got much bigger fish to fry than these tiny little milliseconds here and there, and the resource consumption is the same story: it's much easier for them to throw more memory at the problem than to pay the really expensive cost, which is the human beings they have to pay to operate this thing. So we find again and again that the real cost driver, the number one biggest factor here, is the humans you have to dedicate to it. And of course we try to make Linkerd as simple as possible, and simplicity for me is this really long conversation about simple versus easy and Rich Hickey and all this stuff; a lot of how Linkerd has been designed is consciously with that philosophy in mind. That's also why we're adopting the Gateway API: those CRDs are going to be on your cluster anyway, so now we don't have to introduce anything new; you're sticking to Kubernetes' surface area as much as possible. But at the end of the day, human beings are the expensive and horrible parts of this. So reducing the number of humans involved is, in my opinion, the best way to reduce your TCO for a service mesh. Put everyone in this room out of a job as rapidly as possible; that's my advice.

I've lost track... So, each of you touched on, in terms of total cost of ownership, the power and the simplicity of service mesh, across the board, regardless of which product you're talking about. How do you go about balancing those things for your users: giving them something that is powerful, but simple enough that they maybe can't shoot themselves in the foot with it, hopefully?

Yeah, sure. Yeah, I mean, in the beginning, Istio...

Start with the worst offender. Go ahead, John. We're ready, we're listening.

If this was three years ago, I would say we'd shifted all the way to power, right? Istio originally had many, many features. We've actually, in many ways, reduced the number of features over the past few years rather than added to them, as we found out what was important to users, what belonged in the core project versus in integrations, that sort of thing: trying to simplify the offering. But today and moving forward, I think it's really about layering. Once you have a proxy, there is a tendency to want to put everything you could possibly ever want to do there, right? It's like, oh, why not? I'll just compile my entire application to Wasm and put it in the proxy, as an extreme. But we have to draw boundaries. Do we want to be a full-fledged API gateway?
Probably not, but someone out there does, and they should do that, and then we should integrate with them, right? We've done similar things with ambient mesh, where we're splitting out the L4 encryption layer from the L7 processing, those sorts of things. So it's really not about restricting features to provide simplicity, but about having different layers, so users can choose at what layer they need to solve their problem, rather than getting everything at once in one product.

Big plus-one on the layering. That's one of the reasons I love open source so much, and it's awesome to see a conference like this, with people working in the open to provide accessibility points so that software can layer on top of software. When you're talking about balancing power and simplicity: for those of us who like to use Envoy as a proxy, it's pretty darn complicated. It's pretty difficult to go and program, which is why meshes do that for you, at least the Envoy-based meshes do. And I think that's a model to think about moving forward. If Kubernetes provides one layer of functionality, then maybe Envoy provides another, and then the mesh is an interface on top of that. By doing this work out in the open, by creating extensible, flexible APIs, I think we pave the way for the people who come after us to build more layers on top of that. Gateway API and GAMMA are one area where I think that happens. It was awesome to be part of SMI as well, and to see things like Argo Rollouts and Flagger build on top of that to provide progressive delivery. I think I saw Flagger announce Gateway API support, moving in that direction; that's awesome. We end up creating a better ecosystem for everybody when we can provide those accessibility points, provide those layers, and help users strike their own balance of power versus simplicity. Because there's no one answer. You're going to have your power users, and you're going to have your easy-button users, and the challenge of software is to provide a product, or a project, with enough knobs that it works for both.

Yeah, when I was thinking about power and simplicity, I feel like across many meshes we're seeing this shift in focus toward making things easier to use. A lot of them have really powerful features at this point, so I think a lot of it will just be continuing to look back at the models we've built and rethinking and iterating on them. An example of this in Consul is that we had a model of federation, connecting services across clusters, that had existed for a really long time, and we rethought it from the ground up. The reason was that the old model of federation wasn't really suited to how organizations manage their infrastructure today. We see a lot more teams managing their own infrastructure: team A owns a couple of K8s clusters and that's where they deploy stuff, and team B might deploy on another set of clusters. The original model didn't fit so well with that, so we rethought it and made it so things could be more decoupled. I think just rethinking models like that as we mature over time will help bring that simplicity back.
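[For illustration: the layering Keith describes is visible in a tool like Flagger, which automates progressive delivery on top of a mesh's traffic-splitting APIs. A rough sketch of a Flagger Canary; the target, port, and thresholds are hypothetical.]

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: checkout
spec:
  targetRef:                  # the Deployment Flagger manages rollouts for
    apiVersion: apps/v1
    kind: Deployment
    name: checkout
  service:
    port: 8080
  analysis:
    interval: 1m              # evaluate metrics every minute
    threshold: 5              # roll back after 5 failed checks
    stepWeight: 10            # shift traffic 10% at a time, up to maxWeight
    maxWeight: 50
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99             # require 99% successful requests to proceed
        interval: 1m
```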
And then the other thing, I think, is still managed service meshes. Taking the cost of operating the control plane out of the user's hands makes it a lot easier to have the power of all these service meshes without needing to operate them.

So the question was, how do you balance features and simplicity? I think it's one of the core questions of any kind of software, not just service mesh. Like, the worst is when you end up with a leaky abstraction, where you're describing this beautiful model, but if you use it in this one way, you actually have all these unintended consequences; performance is way worse than if you'd used it in this other way. For Linkerd, we think about this every time we add features, and this was part of why we were really reluctant, until the Gateway API came along, to add things like path-based routing. I just envisioned the poor Kubernetes SRE who's been tasked with adopting Kubernetes and is now trying to learn a service mesh. They've just absorbed, like, 200 CRDs and this whole model of: the code goes in the container, which becomes an image, which goes in the pod, which goes behind the ReplicaSet, which goes behind the Deployment, which goes behind the Ingress. What I like about Kubernetes is that that stuff makes sense; it's logical, and the pieces fit together, but there's a lot there to absorb. And you think of that poor person who's just come up to speed with all of that, and then you introduce a service mesh and now there are, like, 70 more CRDs. So a lot of our focus has been: how do we make this feel and smell as much like Kubernetes as possible, so that when you're learning it, you can at least relate it to all these other things you've just absorbed? There are so many nuances in how you do that, but I think for us that's been the principle: make this stick as tightly as possible to the surface area of Kubernetes, make it feel and smell like the rest of Kubernetes. And of course, as soon as you get outside of Kubernetes, things start breaking down, right? You don't necessarily have that to rely on, so it only takes you so far, but at least within the context of the cluster, we can give it to you in familiar terms. So that's the configuration side. And then there's also the operational side, which is: okay, now I'm running 10,000 Linkerd proxies, or 10,000 Envoys. What is that like?
How much tuning am I doing? How much do I have to become an Envoy expert versus an Istio expert, and how much do I have to become a Linkerd-proxy expert versus a Linkerd expert? And I don't know that there's a good solution there. I think it comes down to: how do you write software that's really predictable to run, so that when you run it, you can build a mental model of what it's going to do? And if it violates that model, you can be like, oh, that's weird, as opposed to, oh, it's just a magical thing and I don't really know what's going on here. Because that's when you get into the dark territory of, I don't know, we'll just try rebooting it and see if it works any better, right? And yes, that's an approach, but it's not the good approach. So it's all to say: I don't know, man, this is really hard. It's a really hard problem, one that we struggle with, and one that anyone who's trying to develop software meant to be consumed by other human beings has to struggle with on a regular basis.

That makes sense. Well, we're out of time for today, so let's hear a round of applause for our panelists.