Hi, this is your host Swapnil Bhartiya, and we are here at KubeCon in Chicago. Today we have with us, once again, Madhura Maskasky, co-founder and head of product at Platform9. Madhura, it's good to have you back on the show after a long time.

Swapnil, really great to see you again.

Yes, it has been a while since you and I talked. So let's just look at where Platform9 is today, from the perspective of your journey and the whole cloud and Kubernetes landscape.

Absolutely, yes. Like you said, virtualization has been our roots. We go back a long way, even before OpenStack, to VMware, and we've stayed true to those roots. Today Platform9 is focused very much on Kubernetes: we help enterprises run, scale, and optimize Kubernetes, and we do that in two ways. If you're running Kubernetes on premises, in your own data center or a co-location or hosted data center, we help you bring up the entire stack, from bare metal to the virtualization layer to Kubernetes and all the way up. Especially if you're running AI/ML workloads, that can be one of the most cost-effective ways of running Kubernetes on premises, without any of the Kubernetes operational burden. That's one very big aspect of our value proposition. The second really big aspect is a very innovative new product we're coming out with, so I'm very excited to talk about it. The product is called Elastic Machine Pools (EMP), and it focuses on public clouds, starting specifically with EKS, although we plan on expanding to AKS and others soon. When you're running on EKS, cost tends to be one of the major issues, especially at scale; costs can really skyrocket. And one of the not-so-well-known facts about Kubernetes is that it is not a very efficient consumer of resources: Kubernetes clusters tend to be only 15% to 30% utilized.
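To see why 15–30% utilization is so costly, here is a back-of-the-envelope sketch (the dollar figure is an assumption for illustration, not a Platform9 number): the capacity you pay for scales inversely with the fraction of it you actually use.

```python
def provisioned_cost(useful_spend: float, utilization: float) -> float:
    """Monthly bill required so that the utilized fraction of the
    provisioned capacity covers what the workloads actually need."""
    return useful_spend / utilization

# Assumed: workloads genuinely need $10k/month worth of resources.
demand = 10_000.0
for u in (0.15, 0.25, 0.30, 0.60):
    print(f"utilization {u:.0%}: bill ${provisioned_cost(demand, u):,.0f}/month")
```

Note that moving from 30% to 60% utilization halves the bill, which is where a claim like "improve utilization and shrink costs by about 50%" comes from.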
We help improve that utilization and hence shrink your costs by about 50%.

When we look at cost, there are other challenges too. Complexity is one of the main challenges of Kubernetes, and complexity also consumes developer time and resources, which adds to cost. Going back to your roots and the lessons you folks learned from that ecosystem: when you look at Kubernetes, what are some of the bigger pain points? Cost is one, of course, but where else do you see Platform9 helping?

Yeah, so two things. We've traditionally always been focused on that operational complexity, which, like you said, results in cost as well. That has been our bread and butter, and we continue to solve that problem, especially for things like AI/ML: from the data scientists' perspective, they don't want to deal with that complexity. So that's one really big area. And then cost, especially again in the AI/ML context: the amount of resources you need to run your inferencing and your training workloads is 10x or 100x more, so cost becomes an even more important consideration if you're running those resources in a public cloud. GPUs are extremely expensive and in short supply, so you want to get the best usage out of the capacity you have. Cost becomes one of the top-of-mind concerns in that context as well. Those are the two areas where we're looking to add value.

If you look at cloud cost management or optimization, a lot of players are coming out, and some of them are incumbents. What sets your offering apart from them? Talk about why you are entering the market and what additional value you're bringing.

Yeah, absolutely.
And that's so fitting, because even as I walked toward our booth at KubeCon, I passed five or six companies all offering a cost-optimization value proposition. But what I'm so excited about with what we're coming out with is that it's a fundamentally different approach to optimizing those costs. Like I said, our roots as a team are in virtualization, so we're bringing the uniqueness of virtualization and private cloud benefits to the public cloud. The way we reduce costs is this: instead of you creating EC2 instances as worker nodes for Kubernetes, we go straight to AWS bare metal. That's a very unique thing that nobody else in the industry is doing today. By going to AWS bare metal, we can deploy our own virtualization layer on top of it; Amazon does the same thing, just behind the scenes. Because we control that layer, we're able to overcommit capacity across your pool of bare-metal servers, and we can live-migrate VMs from one server to another so that your SLAs are never compromised. That's what makes our solution unique.

Let's also talk about the broader cloud native landscape; we've covered cost and complexity. Being here at the event, what other pain points do you see? Of course, Kubernetes was not designed to be simple. It was not designed as a product for customers; it was something used internally and then exposed to everybody else. So there are pain points that are known and that ecosystem players like Platform9 address. But this year, what are you hearing when you talk to developers here?

Yeah, it's interesting. There's definitely a focus on AI, and it is such a new area in general, but also for Kubernetes.
If you think about it, Kubernetes today doesn't have a native way to properly expose GPU resources so that you can share GPUs spatially or through time-slicing. That's some of the innovation coming to Kubernetes through initiatives like DRA (Dynamic Resource Allocation), which I think is very exciting. There have also been interesting discussions about power optimization, or saving power, which makes your data centers, private or public, greener and also contributes to lowering costs. And I think scale is going to be taken to the next level, especially with these new AI/ML workloads. Multi-cluster is another topic I've seen being talked about. Doing Kubernetes multi-cluster is not a solved problem today; there isn't a management cluster that can farm out workloads to a set of worker clusters, per se. But there are projects like Kueue, which just announced yesterday that multi-cluster support is a focus for the next release, because in the context of these large-scale workloads, a single cluster can no longer absorb hundreds of thousands of jobs. I think all of these are the right directions for Kubernetes to evolve in, and I'm very excited about that.

Since you brought up AI, and generative AI is the hottest topic even here: we can talk about generative AI workloads, but is Platform9 going to leverage generative AI itself?

I think that would definitely be great. But for a small company like ours, there is a very big cost to running generative AI, unless you leverage an existing GPT engine, et cetera, which would be the right thing to do. We would like to leverage generative AI for products like EMP. One of the key components of EMP, our cost-optimization product, is a component called the Rebalancer.
It's a patent-pending component that uses data about your existing workload patterns to figure out the best ways to optimize costs and to decide where to place those workloads. That's a place where generative AI could add real value, so we're definitely taking a close look at that.

One thing I like is that when you talk about cost, you're not talking about cost cutting; you're talking about cost optimization. The market is heading that way: there have been layoffs, teams are getting smaller, COVID happened, and companies are becoming cost sensitive. But that cost sensitivity should not translate into cutting resources that are critical. Cost optimization is a more positive approach, and beyond the economics, cloud in general should be more efficient. Can you talk about that from a philosophical, cultural view, that cost optimization should not be a pain point?

Absolutely, I think you're spot on. In fact, one of the aspects of a product like EMP that really excites us is that green aspect as well. In the private cloud world, which is where we come from, companies like VMware focused on giving you the best use of the limited server capacity you have; server consolidation is something VMware made really popular. In public clouds, there has been so much infatuation for many years now with scale, availability, and global distribution, which are all important, but optimizing and getting the best use of the resources you have provisioned has been somewhat ignored over the years. Now, obviously, everybody is very conscious of it, and they're deploying manual ways to improve it.
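The mechanics described above, overcommitting requested capacity based on observed usage and rebalancing where workloads are placed, reduce to a bin-packing problem. As a toy illustration only (the actual Rebalancer is patent-pending and its algorithm is not public; all names and numbers here are assumptions), a first-fit-decreasing heuristic that packs VMs by their observed usage rather than their requests might look like this:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    requested_cpus: float  # what the Kubernetes worker node asks for
    used_cpus: float       # what monitoring shows it actually consumes

def pack_by_usage(vms: list[VM], server_cpus: float,
                  headroom: float = 0.9) -> list[list[VM]]:
    """First-fit-decreasing placement keyed on observed usage.

    Requests may sum to far more than physical capacity (overcommit);
    only actual usage, plus a safety headroom, must fit on each server.
    """
    budget = server_cpus * headroom
    servers: list[list[VM]] = []
    for vm in sorted(vms, key=lambda v: -v.used_cpus):
        for s in servers:
            if sum(x.used_cpus for x in s) + vm.used_cpus <= budget:
                s.append(vm)  # fits on an existing server
                break
        else:
            servers.append([vm])  # open a new bare-metal server
    return servers

# Hypothetical workload profile: 88 requested vCPUs, ~32 actually used.
vms = [VM("api", 16, 4.0), VM("batch", 16, 9.0), VM("cache", 8, 2.0),
       VM("ml", 32, 12.0), VM("web", 16, 5.0)]
placement = pack_by_usage(vms, server_cpus=32)
print(len(placement), "servers for",
      sum(v.requested_cpus for v in vms), "requested vCPUs")
```

In this sketch, 88 requested vCPUs land on two 32-CPU servers because observed usage is far below requests. A real system would also live-migrate VMs when usage spikes, which is the part that protects the SLAs mentioned earlier.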
But if there is a systematic, automation-based approach that lets you make better use of the capacity you've deployed, it's better for the environment: less carbon, better for energy. It has many benefits, so it's a very positive approach, like you said, and that's what really excites us.

And it also frees up a lot of resources that can be used for new business applications. I want to go back to the point about complexity. Kubernetes is known for being complicated, and the more I talk to people, the more I realize that this complexity is not going to go away. There was some discussion on Twitter a few days ago, which you may have seen, about what's next, because this complexity is overwhelming developers. Platform9 predates a lot of these technologies, so what are you seeing?

Yeah, the two most common ways I've seen people tackle that complexity are these. In public clouds, the use of serverless components like Fargate is becoming more and more popular, because it's one less thing for DevOps teams to manage: with technologies like Fargate, you don't have to manage the worker nodes. I think those will continue to be very popular. What we want to do with EMP is provide serverless aspects similar to Fargate while also saving costs. If you don't have to deal with VMs, that's one more operational complexity simplified. Then, if you have a PaaS layer that simplifies a lot of the Kubernetes constructs you have to deal with, that's another layer of simplicity. So I think a lot more innovation is going to keep happening to simplify the consumption of Kubernetes.
But right now, I think, a lot of the ecosystem's work is in simplifying the management of Kubernetes infrastructure.

Complexity is not going to go away, and we discussed what role vendors like Platform9 can play. Once again, the whole idea of Kubernetes, and of other open source technologies, was that developers could leverage them without having to deal with everything themselves; open source solves the day-zero and day-one problems, and that's why we see this big ecosystem. Vendors and commercial players have a very big role in the success of open source. So talk about the role companies like Platform9 have been playing, and will continue to play, as enablers of these technologies.

One of the biggest ways we thought we would add value in the Kubernetes ecosystem, and what we've been doing successfully since we started adopting Kubernetes, is taking away the Kubernetes deployment and management pain, without tying you to public cloud resources only. We pioneered delivering Kubernetes as a service, because we wanted you to experience Kubernetes the way you do in a public cloud while leveraging private cloud infrastructure, which is much cheaper. We're continuing to do that. Now, coming back to the AI/ML world: there are MSPs, not the tier-one hyperscalers but tier-two MSPs, that are getting almost a second wind of popularity, because demand for GPUs is so high that they're getting a lot of customer traction by hosting these training workloads. But they don't have the expertise to provide Kubernetes, and hiring and funding that expertise is a pretty expensive undertaking. So we're looking to partner with them and provide our Kubernetes layer on top of their hardware.
So again, it's a win-win, and the end user benefits in terms of simplified complexity.

When you talk about EMP, it's EKS today, correct? What about other cloud providers?

Yeah, great question. We're starting by focusing EMP on EKS, but EMP as a technology is not limited to AWS or EKS. As for the next directions we're planning, AKS is an obvious next choice. In fact, even here at the conference, we met a lot of enterprises running AKS who could relate very well to the problem we were describing and were very excited about the possibility of a product like this; they were asking when they can get it for Azure. So that would definitely be the next direction. Potentially solving this problem at the EC2 layer as a whole, not only for EKS, is another direction we're looking at. What I'm really excited about is that this technology has much wider applications across hyperscalers.

And are there any specific industries or sizes of companies that you are catering to?

We think any company with an annual EKS spend of about $250K and up would be a great fit for this kind of technology; at that level, halving those costs makes a meaningful difference to your operational budget. So again, any company with an annual EKS spend of $250K plus, upwards into the millions, would be a good fit; there is no upper bound.

And how can people get started?

Go to platform9.com. You will see EMP as one of the listed products under the product menu, and you'll find a lot more info there: a data sheet, a demo video, and ways to contact us. The best way is to book a demo.
So just go to the website and book a demo, and we can even give you a personalized consultation that tells you, based on your environment and your parameters, exactly how much money we could save you.

Of course, there are a lot of things in your pipeline that you folks are working on. Whether or not you can talk about them, give us a glimpse of what to expect next from Platform9.

Yeah, at our heart, what really excites us is simplifying complexity in whatever opportunities we get, whether that's reducing your cost or your operational complexity, so you should expect us to continue to innovate in that space. AI/ML is part of our focus; there's huge potential in enabling enterprises to make good use of GPU capacity across private and public cloud. Those are definitely areas we're going to continue to invest in.

Madhura, thank you so much for taking time out today to give us an update on Platform9. It's good to see you after such a long time; let's not keep such a long gap, let's talk frequently. Thank you so much.

Awesome. Swapnil, it was really great to connect with you after a long time. I wish you the best, and hopefully we'll talk again soon.