Hi everyone, this is the State of the Service Mesh panel. My name is Lin Sun, I'm the Director of Open Source at Solo.io, and I'm super excited to moderate this panel for you. We have an excellent lineup of panelists. Idit, can you introduce yourself?

Sure. My name is Idit Levine and I'm the founder and CEO of Solo.io. At Solo we are trying to make service mesh easier to adopt and operate, focusing mainly on Istio right now.

Nick?

I'm Nick Jackson. I work as a developer advocate at HashiCorp, and I'm also writing a book on service mesh patterns with O'Reilly.

Marco?

Hi everybody, my name is Marco. I'm the co-founder and CTO of Kong. Kong is the creator of Kong Mesh and one of the maintainers of Kuma.

Louis?

Hi, I'm Louis Ryan. I work at Google and I spend a lot of time working on Istio and Google's Istio-related products.

William?

Hi, I'm William Morgan, CEO of Buoyant. I'm also one of the creators of Linkerd.

Excellent, a lot of service mesh knowledge here. So I would like to ask our panelists: how should a user decide if they need a service mesh?

I'll start.

Go ahead, Idit.

Yeah, so like everything in life, this is a trade-off, right? There are a lot of advantages that a service mesh brings to the table. Specifically, all of the service meshes focus on observability, security, and traffic routing and policy. That brings a lot of benefit, though if what you have is one application with two microservices, maybe it's overkill. That trade-off is where people need to weigh things: the volume of microservices you have, the number of applications, the team that's working with them, versus the complexity of operating a mesh. Marco, anything to add?
I guess another way to frame this question would be to ask ourselves: when should our application teams stop building and reinventing the wheel for service connectivity every time they create a new service or a new application? Service mesh has always been positioned as yet another thing that we have to build, implement, and deploy. But perhaps it is an opportunity for us to stop doing the hundreds of things that application teams are doing every time they want to make a request over the network or receive a request over the network. The things that a service mesh solves are not things we don't need without a service mesh. We still need them: the security, the observability. The difference is how we implement them, and a service mesh allows us to do it from the infrastructure, thereby freeing up very precious time for the application teams.

I'll just chime in and say: we help a lot of organizations adopt Linkerd, and typically there's the value prop of the service mesh, and then we'll ask, are you set up for this? And the "are you set up for this" component is: are you already operating in a cloud-native way? Do you have a platform team that owns the underlying platform and can take on the service mesh as part of its responsibility? If you don't have that, then all the technology in the world is not going to help you. Are you operating in a world where developers are able to own their services and build on top of the platform without having to understand every detail of the platform? If you don't have that, then the technology is not going to help you either. So there are some organizational prerequisites, which is what we typically look for first, before even having the conversation about whether we're going to improve your observability or not.
Yeah, these are really good insights. Nick and Louis?

I very much agree with Marco. It's a trade-off between running a platform and actually having to write code. I do think the problem of writing the code isn't necessarily going to go away, but the problem of managing the platform will get easier as time goes on and the various vendors come to market with managed service mesh offerings.

Louis, anything to add?

Yeah. On platform and platform management, William, Nick, and Marco covered that pretty well. There's also the top-down constraint, right? If you're in a regulated industry and you have to do zero-trust-networking-like things, your options are moderately limited, and they range from open source to buying the expensive commercial solution. Those are the things that often drive these decisions, outside the core question: are you ready to engage with the value proposition of a mesh? Is it meaningful to you yet, or do you need to get to a different level of maturity first?

Yeah, absolutely. The next question I would like your perspective on: is service mesh actually getting easier for enterprises to adopt? William, do you want to start off that one?

Getting easier? I guess it's not getting harder. For enterprises especially, there's a new swath of vendors, so some of the landscape becomes more complicated, and now your feature matrix has 100 rows in it, whereas last year it might have had 50. So in some ways it's getting more complicated. But the tooling is being built up. So I think it's a bit of a mix, honestly.

Nick, anything to share?

No, I'd pretty much echo what William just said.

Cool. Idit?
Yeah, so I think it's definitely getting easier, but we're just scratching the surface of how much easier we could make it. Service mesh is getting to a maturity where it actually can be adopted more easily, let's say, but as I said, there is much more we can do: build better tools, improve the user experience, fit it better to the organization that's going to run it, and so on. So it is getting easier, but it's still hard. Hopefully we will work on it as a community to make it simpler.

Yeah. Marco, anything to add?

Yeah, I'm a big believer in simplicity. I think simplicity is a feature. Good documentation is a feature. Easier to adopt: is it easier for the platform teams to deploy it for the application teams? Is it easier for the application teams to write their software knowing that there is a service mesh? So there are two angles to this question, and service mesh is certainly getting easier to use and easier to deploy. I guess the industry can do better at educating application teams on how to operate under the assumption that there is a service mesh running in the underlying infrastructure.

Louis, anything you want to add?

The service mesh products that are out there have collectively gotten better at day zero and day one. So what you see now in enterprises is that the day-two operations stuff is already starting to dominate conversations, at least the ones that I have. If you ship four releases a year and the company has the manpower to absorb one update a year, how are they supposed to engage with the product? What are their costs to perform an upgrade?
You're seeing growth in the number of managed service mesh offerings, not just installed offerings, in response to that, I think. So the conversation has shifted as the early adopters move a little later in their maturity cycle and deal with those day-two issues. Those issues are becoming known to the buying side of the market, so they're asking the same questions in RFPs and RFIs. So there's still plenty of room for growth in making it easier for enterprises to adopt and maintain, because they won't be willing to engage if they don't feel like they can maintain it long-term.

Yeah, totally. But it's exciting; we're at least making it easier to get them on board. What is the current state of service mesh? Nick, anything you want to share?

I think it's really good. Louis and Marco touched on this, which is knowledge. The fact that a practitioner can now more easily find information on "how do I do X with product Y" makes a massive difference to successful use of the tool and to its adoption. As time goes on and more people create tutorials, videos, blog posts, and things like that, that community contribution of knowledge really helps adoption of service mesh. It can always get better, and I think it will, but right now it's really good. It's definitely no longer something where you question whether it's a production-ready technology.

That's awesome. Marco, anything to add?

I agree with Nick. I think service mesh is one of those things that, five years from now, looking back, will feel inevitable. We are distributing our applications. We are decoupling them so we can deploy them faster, in a highly available way.
And the more services and applications we create, the more connections we create among all of these moving parts. It is impossible to think that any organization can be successful with this transformation without something in place at the infrastructure level that takes care of all of these connections, so that we don't have to worry about them anymore. Without a service mesh, I really cannot see how that could possibly succeed. Lots of enterprise organizations and practitioners are seeing that now. And as the adoption of service mesh increases, which it is, service mesh products will get more mature and we're going to hear more and more success stories about how people enabled these transformations with a service mesh. So there are very interesting times ahead of us.

Yeah, there are. Idit, anything to add?

Yeah. When we look at the current state, if you look at the roadmaps of most of these meshes, basically everybody today is talking about making it boring, right? The features are there, it's done, and now it's relatively boring. I think this is a very interesting time, because that shows huge maturity in the market. And definitely there's a market fit; that's why we have this conference, that's why we're here talking about it. Obviously, service mesh has market fit. The interesting thing that will come after, which I'm personally extremely excited about, is how we can push the boundary even further. Now we have this great platform that we all agree should be there, and as Marco said, in five years' time it's just going to become part of the platform. What's interesting is what you can do with it now. You have a platform: how can you extend it? How can you make it more interesting and customized to your own use case?
So I think that's probably what all of us as an ecosystem are going to have to do next: try to push the boundaries, which is pretty exciting.

Yeah, totally. Louis or William?

I think there are a couple of things. Along with the "boring" that Idit just referred to, there is also the platformization, right? Service meshes will become a little less about the features they ship and more about how easy it is to enable that last mile of integration that customers need. That's a transformation that takes time; it will take as long as it took to build service meshes in the first place, I think. So that's one trend. The other trend is that the platforms on which people deploy service meshes are starting to incorporate service mesh features into the platforms themselves, so there's bottom-up market validation. You see some of that even in Kubernetes, which has multi-cluster services now. That value proposition is starting to sink down into the infrastructure, which makes it even more boring, which, in my opinion, is just good.

Yeah, totally. William?

I can only really speak to the Linkerd perspective, but Linkerd was the first service mesh and the one that introduced the term into the lexicon. We've been asked this question every year since 2016 or whatever, the ancient days. For me, Linkerd is in a particularly exciting state. Even at this conference, we have end users talking about using Linkerd for scheduling COVID-19 tests for their students, or for rapid experimentation at big financial institutions, or for chaos engineering, or for adding FIPS 140 compliance. It's stuff that I never imagined Linkerd would be used for, and that feels awesome. We're up for CNCF graduation. There's a whole bunch of cool stuff going on.
But the thing we keep coming back to, and maybe this is Louis's point, is that ultimately service mesh is going to be absorbed. Maybe this is what "becoming boring" means: it's going to just be part of the ecosystem, whether we call it by a special name or not. So the exciting things, especially as we look toward the future, are what we can build on top of this. We know that building a big cloud-native application, subject to all the demands we place on software today, is a hard thing to do. Service mesh solves one critical part of that, but there's a lot more that has to be done. To me, the most exciting bit is where we go from here: what are we building on top of the service mesh?

Yeah, excellent. I think this leads nicely into our next question. What's next for your service mesh project? William, do you want to start that off?

We're just shutting it down. We're done. No more service mesh. Seriously, what's next for us? It's actually going to sound pretty boring, because the concrete roadmap for Linkerd is largely around policy features. The releases leading up to the most recent one, 2.10, have been heavily focused on mTLS and getting identity wired all the way through. That has all been in service of setting us up to do policy and to tackle some of the difficult challenges that we know people have, especially in multi-tenant environments. So that's the concrete answer. More generally, what's next for Linkerd: we have a sense as a project that it has to be this platform on top of which people build things. So we recently introduced the idea of extensions, which are very easy ways of plugging into Linkerd, and we've already got some interesting extensions built on top of that.
And to me, that serves Linkerd's core vision here: we want the service mesh itself to be really small and tightly contained, but we want people to build on top of it in a modular way. So that's what's on the roadmap for Linkerd. I'm really excited to see how that evolves over the next six months or so.

That's great. Marco, do you want to tell us about Kuma?

So Kuma comes out of the work and the efforts that Kong is doing with our enterprise customers; it's the fruit of that work. Service mesh is an important piece of the broader connectivity puzzle of how enterprise architects are going to provide connectivity to their application teams. When we built Kuma, we started from that point. We're going to have teams that are far along in their Kubernetes journey, and teams that are not on Kubernetes yet. So how can we provide a connectivity layer that creates an overlay, an abstraction, across not only Kubernetes but anything the organization may be running, including virtual machines? That's obviously a very complicated problem: running a service mesh in a multi-zone capacity, upgrading it, making sure that if something goes wrong, we always know where it went wrong. So the operations of running a multi-zone service mesh across Kubernetes and VMs and across multiple clouds, and the easy upgrade button that lets us upgrade the service mesh and the data plane proxies: that's certainly something I'm very excited about. We've also been doing lots of work building a foundation for our adaptive routing features, which could let us improve the high availability of our applications without necessarily requiring human intervention every time. The more services, applications, and connections there are, the harder it's going to be to stay on top of all of this.
And at the end of the day, even the service mesh is a means to an end, and that end is reliability and security. So we are working toward automating all of that, so we can remove the human factor from the equation.

Yeah. Nick, what about you? I saw you nodding.

Yeah, predominantly operational aspects, I suppose; making it easier to operate is one of the key things. We're going to continue to expand HashiCorp Cloud into different clouds, so you'll have managed Consul across more of the cloud vendors in the coming year. The operational aspects of configuring multi-cluster capabilities, or of connecting a Kubernetes workload to a virtual machine, are going to be simpler: easier to manage the security and the actual elements of configuration. And it's predominantly about the Kubernetes story as well, delivering a really great experience to a Kubernetes practitioner. It should feel native, like you're just using an extension of Kubernetes and not a different product. Those are some of the goals we're working toward.

Yeah, makes sense. Louis, anything you want to share from the Istio perspective?

Well, I'll share two perspectives. With my Istio hat on: a lot of what Nick just said, plus the day-two operations stuff around upgrades, maintenance, and lifecycle management in general. Istio has lots of features, so most of our future roadmap is incremental, customer-driven work, probably with a focus on compliance and security. And with my Google hat on: it's enabling Google's customers to easily adopt and absorb service mesh. We recently launched a fully managed Istio-based solution for customers, so they don't manage the control plane aspects of Istio anymore; we do that for them.
And yeah, that's aligned with the goals around day-two operations as well: just trying to lighten the load for people as much as we can. It's also aligned with the trend Nick just talked about; HashiCorp is going to provide managed Consul and similar things.

Yeah, makes sense. Idit, anything you want to share about Istio and Gloo Mesh?

Of course. We are a little bit different, because most of the people talking here are closely associated with one project, and at Solo we are working mainly with Istio today. But we started our journey a little differently. We started with Envoy; that was what we built on. So the first thing we built was an API gateway. Basically, we built the building blocks to create this platform. The first thing we have is an API gateway built on top of Envoy, which is now shifting to run on top of Istio. The second thing is Gloo Mesh, which helps manage Istio and focuses on day-two operations, with a very big focus on multicluster: managing a lot of instances of Istio, failover between them, and so on. The third thing we focus on is extending the mesh with WebAssembly; we worked very hard on this and brought WebAssembly Hub to the community and to our product. And the last one is the developer portal, which in my opinion is the logical next step to manage all of this: bringing it to the developer and being able to expose all the APIs that are running. So those are the building blocks we have. And now that we have this platform, all working specifically on Istio and Envoy, and we have the knowledge of running Istio and Envoy, again, we are pushing the boundaries.
So the next thing we're going to do is build on top of it. And we're going to have some very crazy and awesome announcements very soon. So yeah, stay tuned.

That's excellent. Those are all the questions I have. I would love to hear questions from the audience.