Hi, this is your host Swapnil Bhartiya. We are here at KubeCon in Chicago, and today we have with us Murli Thirumale, GM of Portworx. Murli, it's great to have you on the show again.

It's really good to be here, Swapnil.

Since we are here at the event, I would love to hear a bit about your experience so far.

We've been to many KubeCons now; this is my seventh or eighth one. I've been involved with the Kubernetes movement right from the beginning as a co-founder of Portworx. The difference I see this year is that there is an air of innovation but also an air of maturity. The industry has really come to the point where you see large companies, industrial companies, deploying at scale. So there are two differences. The first is the scale of deployment: customers are putting mission-critical applications on Kubernetes and doing it at large scale. The second is that many large, traditional companies, who you don't think of as innovators, are now deploying. You always find some early adopters in Silicon Valley, but this KubeCon, to me, is an example of where we've crossed into the early majority, both in the number of companies deploying and in the types of applications and the scale of deployment. So it's a very exciting time for the Kubernetes community.

When you look at this Kubernetes community, this whole ecosystem, what role do you see for Portworx? And give us a quick overview of what Portworx is today, since you folks are known for storage.

Portworx's role is very simple. In the ecosystem, Portworx is responsible for ensuring that applications and their data are deployed reliably and with high resilience.
So we provide persistent storage, backup, disaster recovery, and data services that are all managed underneath Kubernetes. When customers deploy containerized applications, there are really two parts to the application: the app and the data. Kubernetes is used to orchestrate the app; Portworx is used to orchestrate the data. So we complete Kubernetes by allowing it to manage not just the app but also the data.

You were saying that the exciting thing is the maturity, and the innovation is also there. A third thing I want to add to that is adoption. Talk a bit about the trends you are seeing, because Kubernetes has now moved into production, and you folks have been around for a while. What are the trends?

I would say there are three significant trends happening. One of them you can see everywhere, and that is the creation of platform engineering. Swapnil, platform engineering is just the maturation of DevOps. What used to be a cultural phenomenon called DevOps, having developers and operations work together, has now been formalized into an organization. They have a budget. Think of it as shadow Kubernetes IT being consolidated: people were doing tiny Kubernetes projects, there were islands of Kubernetes, and now all of that is being consolidated into one organization with the responsibility of making sure Kubernetes is deployed centrally and deployed well. The other part of that is that platform engineering is now responsible for a curated stack. They pick the technology stack, they recommend it, and they manage it. Gartner had a great phrase for it: they called it paved roads.
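As a concrete illustration of "Kubernetes orchestrates the app, Portworx orchestrates the data," here is a minimal sketch of how an application might claim Portworx-managed storage through ordinary Kubernetes objects. This is a hedged example, not taken from the interview: the provisioner name and the `repl` parameter reflect the Portworx CSI driver as commonly documented and may differ by version.

```yaml
# Hypothetical sketch: a Portworx-backed StorageClass with 3-way replication.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated
provisioner: pxd.portworx.com    # Portworx CSI provisioner (assumed name)
parameters:
  repl: "3"                      # number of data replicas (assumed parameter)
---
# The app requests storage via Kubernetes; Portworx provisions and
# replicates the underlying volume beneath it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: px-replicated
  resources:
    requests:
      storage: 10Gi
```

The point of the split is that the application only ever speaks the Kubernetes storage API, while resilience concerns such as replication live in the storage layer's parameters.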
It's a way to ensure that the Kubernetes deployments done by developers are assured to run at scale and to be reliable. There is failover. There are guardrails to ensure that people don't push Kubernetes beyond its limits. Platform engineering also allocates resources per the requirements of the application, and provides billing and security. So it is really an example of what happened with cloud IT 10 or 15 years ago now happening with Kubernetes.

Very well said. So that is one trend. What is the second?

The second trend is something everybody is familiar with now: how AI/ML and the Kubernetes stack, the cloud-native stack, are converging. Everywhere I go I hear AI/ML, largely because of generative AI. But if you think about it, AI/ML has certain characteristics. First, the models are changing; data scientists are changing the models all the time, so there is rapid change. Second, the amount of data being brought into the system for training varies, and the number of users varies, so it needs a very elastic system. Third, because data scientists are not experts in computers and storage, they need a self-service model. All of this points to why containers and the Kubernetes model are ideal: they provide self-service for developers and data scientists, they provide an elastic usage model, and they provide the ability for people not to have to make copies of data. If you think about it, the data is constantly changing, and in AI/ML, data is all-important. One way to ensure that the data is curated and managed properly is to have one copy that is available to everybody through a virtualized model, which is what Portworx provides. So the second trend we're seeing is the convergence of AI/ML with the use of containers and Kubernetes.
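The "one copy of data available to everybody" idea can be sketched in Kubernetes terms as a single read-write-many volume that multiple training pods mount, rather than each pod copying the dataset. This is an illustrative sketch only: the `px-shared` class name is hypothetical, and the assumption is that the backing storage class enables a Portworx shared-volume mode.

```yaml
# Hypothetical sketch: one shared dataset volume mounted by many pods.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data
spec:
  accessModes: ["ReadWriteMany"]   # many pods share one copy of the data
  storageClassName: px-shared      # assumed class configured for shared volumes
  resources:
    requests:
      storage: 500Gi
```

Each training job then references `training-data` in its pod spec, so curation and provenance apply to a single authoritative copy instead of many drifting duplicates.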
So these are two trends that are very common, that we're seeing all over KubeCon.

When we look at these trends, they are sometimes driven by innovation, and sometimes they lead to innovation. What do these trends mean for Portworx?

I'll tell you one thing, and I think it applies not just to Portworx but to most companies in the Kubernetes ecosystem. People think that innovation happens only at software companies or at startups. The reality is that for us to introduce innovative capabilities, you need innovative customers who are willing to take these technologies on. So there are two innovators in this ecosystem: the creators of the technology and the users of the technology. Portworx, for example, has always created new capabilities with a co-creation model. We have lead customers who help create these new technologies with us; they are the leaders in their industries. We've had customers like T-Mobile, for example, or Ford, who have created these new capabilities by working with us, helping to deploy them, helping to improve them, and creating that product-market fit. So this is a wonderful ecosystem, with both suppliers and users coming together to create the new technologies of the future.

I want to talk about people for a while, which means cultural change. With these trends and the whole evolution of the Kubernetes landscape, we have been talking about DevOps, DevSecOps, and the platform engineering you touched upon. Do you see a next wave of cultural change, or is platform engineering where we'll stay for a while?

We'll see in a bit. In the early days of Kubernetes, the technologies were evolving very fast and they were not necessarily as stable. Now, six or seven years later, the technology has stabilized.
There's a lot of innovation going on, but there's also a lot of stability. The APIs are well defined. There's been consolidation in the industry. A lot of products have matured to the point where you can deploy them reliably. So what does this mean? One thing it means is that you don't have to be as much of an expert to deploy these Kubernetes stacks. The stacks are mature and they work pretty well, so customers' ability to develop and deploy applications has gotten a lot better, because the expertise needed is a little bit less. The second thing that has happened, which is very significant, is that a lot of these technologies are being offered as a service. And when something is consumed as a service, Swapnil, you know this, you need less expertise. A service is meant for ready-made use; it's off the shelf, and you can use it without necessarily having to go under the covers. That's the other sign of maturity in the industry. A lot of capabilities are now offered as a service; Portworx, for example, now offers much of our storage, DR, and backup as a service. The third thing I would say is that, like most mature technologies, Kubernetes will become more and more invisible. People will use Kubernetes without knowing they're using Kubernetes. It reminds me of how, in the past, people would talk about transistors and ICs. Now there are thousands of chips inside a car, probably 50 microprocessors, and we don't even think about it, because it's all invisible. So the goal is really to make Kubernetes something that is very powerful but also more and more invisible, so that people get its benefits without having to know every aspect of it.
We hear a lot that Kubernetes is too complicated, too complex, which it was not designed to be. There was a Twitter thread a few days ago, you may have seen it too, saying it's just too complicated for developers. What's next to make it easier? When you say it will become invisible, do you see a successor, a new technology, or will vendors just make it easier and invisible?

I think it is about vendors making it easier, for example by operating much more through a GUI. You don't really need to know all the APIs that Kubernetes offers in order to use them. A perfect example: in the early days of Portworx and Kubernetes, people used to operate on the CLI. Today we provide a user interface where people can just point and click, drag and drop. All of these things will make it more invisible. I actually don't see Kubernetes being supplanted by something new for quite a while, because it is really still on a very steep ramp of use. The maturity of the technology means that more and more people are using it, and they are not asking us for new fundamental technologies; they're asking for more of an ecosystem around Kubernetes. Kubernetes itself has become like a meta-platform for other people to build on: data platforms like Portworx, security platforms, orchestration platforms. So Kubernetes is now almost an underlying technology for other technologies to be built on top of.

I look at it like the Linux kernel; it's 30, 40 years old, so it's not about a successor. It has become a very foundational technology.

Exactly. And yes, of course, there are vendors making it easy to use Linux. The same thing is happening here.
One last question before we wrap this up: ChatGPT, or generative AI. What role do you see for generative AI in the Kubernetes ecosystem or for Portworx?

There are so many different ways that generative AI, or any AI actually, is going to become part of our ecosystem. The first and most obvious is for us to use it in our business. As an example, Portworx is beginning to use generative AI to improve our documentation, improve our test plans, and improve the way we create examples and demos for our products. So one, just using it inside our business. The second way we're beginning to use it is in our products. A copilot is a very common version of that. For example, when you create scripts within a Portworx product, you can now create a setup by just describing, in plain natural language, what goal you're trying to accomplish, and the copilot generates the code that is used by our product. So it's built in as an interface to our product. The third thing is obviously helping our industry, helping to enable ChatGPT and other applications. I talked a little bit earlier about the nature of containers and Kubernetes: the fact that it is very elastic, very programmable, and very resilient in dealing with various workloads; the fact that it is a self-service model; and the fact that, by using technology like Portworx that virtualizes the underlying data, you don't have to make multiple copies of data. Data curation and ensuring the provenance of the data are very important in AI. All of these are reasons why you would use a container and Kubernetes stack to deploy LLM models, generative AI models, and even ML models.
So I think there's a natural convergence of how these things come together in our industry.

Murli, thank you so much for taking the time today. Great insights, and I love the way you talked about those trends, the role of culture, and of course the role of Portworx in this ecosystem. Thanks for all those great insights, and I would love to chat with you again.

Thank you. Absolutely a pleasure.

Thank you so much.