k0smotron is a new way and a new paradigm for deploying and managing Kubernetes control planes.

Hi, this is your host, Swapnil Bhartiya, and we are here at KubeCon in Chicago. We have with us once again Shaun O'Meara, Field CTO of Mirantis. Shaun, it's good to have you back on the show.

Always good to be here. Happy to chat with you.

Last time we talked was almost a year ago, so give us an update on what Mirantis has been up to.

Mirantis has had a very busy year. A lot of great new things coming in the open source space, a lot of doubling down on our plan to move to the future of Kubernetes, and a big focus on our Lens and Lens platform capabilities as we move forward. So it's been a super busy year for us.

We are here at KubeCon. Any announcements that you folks made? Anything new from Mirantis?

As I said, some very cool things in the open source space. Building on the success of our k0s Kubernetes distribution, we've just released k0smotron, which is going into full support as of this week. k0smotron is a new way and a new paradigm for deploying and managing Kubernetes control planes.

When I looked at the way you spell k0smotron, there's a K, there's a zero, and there's an S. So explain what that means.

It's just a cool name, but it's built upon our k0s offering, which is the zero-friction Kubernetes, so we're extending that name. And the "motron" part is about moving fast: how do we let people who want to build Kubernetes clusters at speed, in multiple environments, do so simply and very, very quickly?

But what led to the creation of k0smotron? What was the driving force?

That's a great question. There are three key factors that drove the creation of k0smotron. The first is that in the traditional Kubernetes world today, when we're building Kubernetes clusters, we tend to have a control plane of three to five nodes, and then we add hundreds of workers to those.
And that's what we see in a lot of our customer base today. One of the major challenges with that, of course, is the gigantic blast radius and the complexity you have to deal with: you've got to carve up those nodes, prevent noisy neighbors, manage tagging, and in general make sure your clusters are stable. The second factor is resource overhead: if I want to create a lot of clusters, I have to keep creating those three-node control planes, and those are resource intensive. And the third, really important, reason is that building Kubernetes control planes is complicated. It's still complicated. How can we simplify that experience and make it happen in real time? How can I get a Kubernetes control plane in seconds rather than minutes to hours? That was the key thinking behind k0smotron.

Core to that, and I think this is really what's important here, is that we've taken the Kubernetes control plane and containerized it. We no longer need those three nodes. We have what we call a mothership cluster running Kubernetes, and I can spin up a brand-new Kubernetes control plane with three lines of YAML in seven seconds. What that means for me as an operator of Kubernetes clusters is that I can now spin up a new cluster on demand, in real time, so a developer isn't left waiting and a CI system doesn't have to wait a long time. Using the konnectivity service, we can connect remote workers to these clusters as long as we've got access to the API endpoints. And I can put those workers anywhere. Why anywhere? Anywhere that's got connectivity back to that konnectivity service, which means I can run my mothership cluster on AWS, Azure, or on-prem, and I can attach my kubelet-based workers from AWS, Azure, on-prem, or pretty much anywhere I can run a machine.
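The control-plane creation Shaun describes, a few lines of YAML applied to the mothership cluster, can be sketched with the Cluster custom resource from the open-source k0smotron project. This is a minimal illustration; the name and namespace here are made up for the example:

```yaml
# Minimal k0smotron control plane definition. The k0smotron operator,
# installed on the "mothership" cluster, reconciles this resource into
# a containerized k0s control plane running as pods on the mothership.
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
  namespace: default
```

Applied with `kubectl apply -f cluster.yaml`, this gives a child control plane without any dedicated controller nodes, which is where the seconds-not-minutes claim comes from.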
And suddenly I have distributed clusters in the multi-cloud, which really changes the paradigm. With CAPI, the Cluster API, added into the mix, I'm able to automate the deployment of a cluster in real time, and the longest thing I'm waiting for is the instances to start on my chosen cloud provider. It changes the way we think about it. Now I can add cluster deployment into my CI/CD in real time. It changes the way I do testing, changes the way I build clusters. And I can very simply build a cluster for an app, rather than have multiple apps on a cluster, because I don't have the overhead. That's it in a nutshell.

A lot of times when you build these things, you already have users. You're already working with them and you are solving their problems. You may not be able to name them, but give us an idea: what are the ideal use cases for this?

That's a fantastic question. I want to talk about k0s for a second to help answer it. k0s was introduced as a next-generation way of deploying Kubernetes clusters, and the thing about k0s was: how do we make it simple? That's the core message behind k0s: zero friction. That simplicity has been carried through into k0smotron. Today we've got about 200,000 k0s clusters across the globe providing telemetry. The core use cases we're seeing, among the customers we're talking to today, tend to be platform teams who are running Kubernetes clusters on-prem and who are running into the challenge of, "We have a mandate to consume public cloud, but we don't want to be locked into a public cloud provider." A lot of the customers I'm talking to tend to be in the financial services space, and their argument is: we still want control of our control plane. We want to own our control plane, but we want to use public cloud providers, and we want to use more than one public cloud provider.
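The CAPI integration mentioned above can be sketched as a standard Cluster API Cluster object whose control plane is delegated to k0smotron. The resource kinds follow the k0smotron CAPI provider as documented upstream; the names and replica count here are illustrative assumptions, not a definitive manifest:

```yaml
# Sketch: driving a k0smotron-hosted control plane through Cluster API.
# The CAPI Cluster delegates its control plane to a K0smotronControlPlane,
# which k0smotron runs as pods on the mothership cluster.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-capi-cluster
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: K0smotronControlPlane
    name: demo-control-plane
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane
metadata:
  name: demo-control-plane
spec:
  replicas: 1
```

Because this is just a manifest, it can be committed to a repo and applied from a CI/CD pipeline, which is the "cluster deployment in my CI/CD" pattern described above.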
And so what we're seeing is primarily platform teams coming along and saying: this gives us a way to manage our Kubernetes in all sorts of different environments and keep control of the cluster, but still take advantage of public cloud.

I want to talk about two things. When we look at the Kubernetes and cloud arena, one is complexity and the second is cost. Looking at either k0s or k0smotron, what impact do they have on these two?

On the first, I think we've been talking about the complexity of Kubernetes for a long time, and ultimately Kubernetes is becoming very, very commoditized. I can spin up a new cluster at AWS and pay only for my workers, but I'm locked into the AWS model of doing that, and I'm locked into AWS. And that speaks to cost: I have to use AWS for everything in those clusters, or I have to use Azure. By simplifying the process of deploying a cluster, and by separating the workers from the control plane, not only do I get better security, I also get a cheaper way to run my control plane. I no longer need three nodes if I want my own private control plane, and I can spin up a lot more clusters on the same resources, which reduces my cost. But the other side of cost, of course, is manpower. It's the platform and platform management teams that need to deal with all the complexity of kubeadm deployments and Terraform. By combining k0smotron, k0s, and CAPI, I set all that up once. I have a definition file that's a few lines of YAML; quite literally, I can do it in three lines or ten lines depending on how much I want to modify, and I can spin up a cluster on demand. Again, less overhead and less cognitive pressure on my platform team, which can focus on building applications for me.

The things that you folks do are open source. Talk about the open source aspect of k0smotron; that's number one. Number two is that open source can only go so far, and that's why there's commercialization of open source.
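Attaching your own remote, kubelet-based workers, as discussed earlier, works through join tokens. A sketch, assuming the JoinTokenRequest resource described in the k0smotron documentation (names here are illustrative):

```yaml
# Request a join token for the containerized control plane created earlier.
# The k0smotron operator writes the resulting token into a Secret on the
# mothership cluster, from where it can be copied to the worker machine.
apiVersion: k0smotron.io/v1beta1
kind: JoinTokenRequest
metadata:
  name: demo-cluster-token
  namespace: default
spec:
  clusterRef:
    name: demo-cluster
    namespace: default
```

On the remote machine, whether it lives in AWS, Azure, or on-prem, the worker is then joined with the k0s binary, along the lines of `k0s install worker --token-file /path/to/token && k0s start`, as long as it can reach the control plane's API endpoint.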
And you folks have championed that as well, going back to the OpenStack days: that you do need commercial support behind it. It's not a "versus"; it's an "and."

I think that's a fantastic point of view. For us, we love open source. We've built a business, a very successful business, based on open source, and part of that is contributing back to the open source community. For us to be able to afford to do that, we need people to consume the open source and pay for support. And the value they get by paying for that support comes down to simple things: we have a team in the background that guarantees we'll fix any CVEs found within three days, and it guarantees that if you run into a problem with a platform deployment, you've got someone to hold your hand through that process. We believe strongly in open source and we continue to believe in open source. But we also, as I've said, believe that by providing support options for our customers, they are able to give back through us to the open source community. The other side of open source, though, is that by opening it to a community of people, we're able to see more use cases, get more people contributing to the quality of what we offer, and have more eyes on the problem. Ultimately, our goal is to standardize, standardize, standardize, and open source in many ways allows us to do that standardization.

One last question before we wrap up: you folks keep coming out with new technology at every KubeCon. Of course you can't share a lot of details, but what should we expect next from Mirantis?

We've got some very cool stuff coming down the pipe. I don't want to give away too much right now, but we're looking at different ways of being able to define Kubernetes clusters and all the dependencies of a Kubernetes cluster. That'll be coming in the near future.
We're really focusing on that simplicity of experience. And in the Lens space, some really cool things are coming from the Lens team, which is the same team that does the k0smotron and k0s work. So watch this space; we'll share things. I don't want to put too much out there right now, but exciting things are coming.

Shaun, thank you so much for taking time out today and giving us an update on Mirantis.

Thanks for having me, and as usual, I'd love to chat with you again.

Thank you. Thank you very much.

Always good to see you.