Hi, this is Swapnil Bhartiya and welcome to our newsroom. Today we have with us Louis Ryan, CTO of Solo. It's great to have you on the show.

Nice to be here. Thanks for having me.

Today we are going to talk about the graduation of the Istio project in the CNCF. Before we talk about the graduation and what it means, it's a good idea to talk about where service mesh is today and the role it's playing, because in the last two or three years a lot of things changed, with Google donating the project to the CNCF. So let's get a sense of the state of things, where we are today.

Istio, the project, has just been chugging along, continuing to build features and value for the users. It's been very stable for a long time, and a lot of people are using it in production. So from the project's health and status point of view, things are moving along very nicely, which is actually why, when we applied to the CNCF and went through the graduation process, it went very quickly. If you think about what the requirements are for graduation, it's a lot about: do you have a solid user base, a solid contributor base, have you got good development practices in place? All of those things have been going on in Istio for many years.

It touches so many different areas. We can talk about observability, security, and it plays in different domains. So that's what I want to understand: the evolution, where you see service mesh today, how mature it is, and what role it's playing when we look at the whole cloud native or Kubernetes landscape.

Yeah, I mean, service meshes are kind of generic networking infrastructure, right?
No matter what kind of application you build or where that application runs, a service mesh is designed to give you a kind of elevated abstraction of the network that works with applications to do the things application developers need to do. Whether it's monitoring the traffic into and out of my service, enabling security and authorization of calls in and out, or controlling traffic so I can do delivery and deployments: this is what service meshes do, and we see them as extensions of the network. A service mesh is just a higher-level abstraction of existing networking infrastructure. And that resonates with a lot of people, because what most people run on their networks is applications. For a long time, applications weren't getting a lot of value out of networking infrastructure. Networking infrastructure enabled end-to-end connectivity, yes, but that's largely a solved problem today. So raising that abstraction up to give people more value is what we're about.

What does graduation mean? Not only for the project itself and, of course, for vendors like Solo, but also for the larger ecosystem, which includes users and other vendors. Talk about what graduation actually means.

Graduation is effectively a mark of maturity, right? Just like graduating from college, it tells the outside world that you've developed a certain amount of mastery and skill in the subject. The CNCF establishes a bunch of bars around maturity, process, and how you engage with the user community. Do you have good security practices? Are you fixing your CVEs? Things like that. So graduation really is a mark of maturity that you can show to the outside world: that you're ready for production as an open source project.
And maybe this information is not necessarily new to the average practitioner who's been working in the space for a while, but when people need to bring things into production, they have to go talk to executive stakeholders and others in the organization, and having that mark just gives the organization more confidence that they're making a decision based on well-founded industry practice. That's why graduation is important. If you look at the graduated projects in the CNCF, they're all very stable, high-quality projects that have been around for a while. So it's reasonable to say that you can actually use them in building your business.

Can you talk a bit about adoption? If you can, share some use cases, who's using it today.

Yeah, so there's a very large list of big industries and enterprises. It's really hard, off the top of my head, to just share a huge number, but certainly we at Solo have a very big user list of very recognizable names. If you go to any big enterprise today, you're very likely to find Istio there in production already in some form or other, whether it's through a vendor or directly with open source. Probably the more interesting thing is who's getting involved in the project. Since the donation to the CNCF, one of the bigger developments has been that Microsoft has actually stepped in and started to contribute to the project. Anything that pulls a bigger community together is going to be better not just for the project but also for the end users, because it's more of the industry getting behind the project. That was pretty exciting to see, and certainly they've been a great contributor since they joined. So yeah, we've seen a lot of value just by getting the project into the CNCF.
And I think that will benefit users in the long run.

When we go to events like KubeCon and a lot of others, we meet the user community, and this last KubeCon was really big. When you go to these events, or when you interact with your clients or users, what are some of the pain points that are still there when it comes to service mesh and networking as a whole, for cloud native, Kubernetes-native workloads? Or do you see that all those problems are solved and it's not a big issue anymore?

It would be great if all the problems were solved. Like most open source solutions, they have a maintenance cost. Istio certainly has that: the software releases every quarter, and you have to go and do upgrades to stay on a release that's going to get CVE support, either from the community or from a commercial vendor who's delivering a product based on it, like Solo. The same is true for Kubernetes. Anytime you have to upgrade a large or important piece of infrastructure, that's toil for the platform teams within these organizations. That's why you've seen a lot of focus on these subjects over the years, in Istio and Kubernetes and many of the other big open source projects. I think that's what people care a lot about. And whether it's delivered as software or delivered as a service, that is important. You see a lot of market demand for things being delivered as a service, for instance, but being compatible with open source, because there is this operational cost. So if I have to name the biggest problem with any of these projects today, it's the cost of maintenance for the platform team.

This complexity is not going to go away. It is only going to grow more and more.
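The quarterly upgrade toil described here is commonly handled in Istio with revision-based canary upgrades, where a new control plane runs alongside the old one and namespaces are migrated over gradually. A minimal sketch, assuming `istioctl` is installed against a sidecar-based mesh; the revision and namespace names are illustrative:

```shell
# Install the new control plane side by side under a revision label
istioctl install --set revision=1-22 -y

# Move one namespace at a time to the new control plane
kubectl label namespace my-app istio.io/rev=1-22 istio-injection- --overwrite

# Restart workloads so their sidecars reconnect to the new revision
kubectl rollout restart deployment -n my-app

# Once everything is migrated, remove the old control plane
istioctl uninstall --revision 1-21 -y
```

Running both control planes side by side means a namespace can be rolled back by simply relabeling it, which is a big part of making upgrades "boring" in the sense Louis describes.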
But we have to help users by lowering the barrier; we need to be able to deal with this complexity. Talk about how Solo and other players in this space are helping users. The thing is, we get so overwhelmed with this technology that we forget that most of these companies should just focus on building the business applications that are their core business, versus getting overwhelmed with all this complexity and plumbing.

Certainly in the cost-of-maintenance world, our primary focus is to make sure that if you have to go through an upgrade or an installation cycle, it's not surprising; it's boring. Things like: I run an upgrade tool, and when the upgrade completes, or partially progresses, I can just watch it; I don't have to be constantly attending to it. The second part is minimizing API change. Getting to stable APIs, and having those APIs be stable for years, really helps, because it reduces the amount of effort and churn, and how far throughout the organization that churn has to propagate. APIs propagate to the very edges, so that's critically important. Certainly within Istio, we at Solo have been driving this effort called Ambient Mesh, which changes a big part of the installation and upgrade profile of the project, and we think it's a major improvement in the operational cost of Istio. We've been actively driving that project for about the last 18 months. As part of Ambient Mesh, we don't have sidecars anymore, so we don't have to maintain sidecars. I install it in the cluster, and when I complete the installation, certain features of Istio are already just on. That's a big difference from where things were with service mesh and having to do injection all the time.
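The sidecar-less installation being described looks roughly like this in recent Istio releases; a sketch, assuming `istioctl` is available and using a hypothetical namespace name:

```shell
# Install Istio with the ambient data plane profile; no sidecar
# injection webhook or per-pod proxy is involved
istioctl install --set profile=ambient -y

# Opt a namespace into the mesh; its workloads get mTLS via the
# node-level ztunnel proxies, with no pod restarts required
kubectl label namespace my-app istio.io/dataplane-mode=ambient
```

Because nothing is injected into the pods themselves, upgrading the mesh no longer means restarting every workload, which is the operational improvement this part of the conversation is about.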
It's just a lot of focus on reducing that operational toil.

Now that you've graduated, what are the other things you folks are working on, either purely from Istio's perspective or from Solo's perspective? What's next?

I'm sure Ambient Mesh is the biggest thing overall. It's not just designed to help with installation and maintenance toil, though that is a big part of what it does. When you install it, I talked about features being just on in the cluster, and that includes mTLS. Being able to go from a situation where I have a Kubernetes cluster and I need to meet a compliance or security goal, to having mTLS on for all the traffic in the cluster in a single installation step, is a really big deal. In all the interactions we've had with users, there are kind of two different groups of service mesh users, or two entry points into using it. One is the security and compliance buyer. The other is people looking to make operational improvements: blue-green deployments, or getting telemetry and better insights into what's going on in the cluster. So by being able to just give you mTLS in a single installation step, we deliver a lot of value to a pretty broad class of users. That's a big building block. Then extending on that incrementally to deliver the other value really changes how people think about service mesh, making it more just part of the network. This is why it's called Ambient Mesh: we just want it to be there. It's there when you need it, and when it's not needed, you don't have to think about it.

Louis, thank you so much for taking time out today, and of course for giving us an update on the whole market. Thanks for all those insights, and I would love to chat with you again.

Thank you. All right, thank you so much. Great to be here.