Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. Welcome back, this is theCUBE's fourth year of covering KubeCon and CloudNativeCon. This is the North American show here in San Diego, it's 2019, he is John Troyer, I am Stu Miniman, and happy to welcome our guests to the program. First of all, I have Murli Thirumale, who's the co-founder and CEO of Portworx, and Murli, thank you so much for bringing one of your customers on the program, Satish Puranam, who's a technical specialist with Ford Motor Company. Gentlemen, thank you so much for joining us. Delighted to be here. All right, so Satish, we're going to start with you, because the growth of this ecosystem has been phenomenal. There were end users up on the main stage, we've already had them, and there are now over 129 CNCF end-user participants. But bring us into Ford. There's so much change going on; of course, everybody talks about autonomous vehicles, but technology has really embedded itself deeply into a company like Ford. So before we get into all the KubeCon topics, just bring us a little into your world: what's happening, what's changing, and what your team does. Sure. Ford has generally been on a transformation journey for about the last two years now. That includes completely re-doing our data centers and our application portfolio. As part of this migration journey, we started with Cloud Foundry; we have been a huge Pivotal Cloud Foundry shop for some time. And then we also started dabbling with Kubernetes and its associated technologies, primarily looking at data services, messaging services, a lot of the stateful things, right?
Cloud native, like Kubernetes, and, I'd say, more so Cloud Foundry, sorry, did great wonders for us for 12-factor apps. So what do we do with stateful things? That's when we started dabbling with Kubernetes and things like that. Satish, if I could, I want to step back one second here. You're doing transformation, consolidation, moving from monoliths to microservices. What was the business driver here? Was it one day some executive got up and said, hey, this sounds really cool, go do it? Or was there a specific driver in the business that your organization needed to respond to? I think the business driver is cost efficiency. There were a lot of things that we had not done, so there was a lot of technical debt that we had to pay down because of fragmentation and various other things. So it's always about realizing business efficiencies and, most importantly, the speed at which we deliver services to our customers internally. That was the main driving force for us engaging in this transformation journey over the last few years. Okay, Murli, we'd love to bring you into this conversation. Obviously, agility is one of the things I hear most from customers as the driver of new initiatives. Infrastructure, for the longest time, was in many ways the boat anchor that held us back, especially, you know, our friends in networking and storage; it is difficult to change and keep up with what's driving things. So bring us up to speed with Portworx and how you fit into Ford and more broadly. Yeah, just a quick introduction to Portworx. We've been around for about five years now, right from the early days of containers and Kubernetes. And we have quite a few customers now in production, about 130 customers, 50 of the Global 2000, and so on. Almost all of those customers are in production, deploying significant workloads.
The interesting thing about Kubernetes in the last couple of years especially is that everybody recognizes it's won the war for orchestrating containers and applications. But the reality is the customer still has to manage the whole stack, not just the app, but the data itself underneath. And that's the role of Portworx. Portworx is the storage platform for Kubernetes, and we orchestrate all the underlying storage and the data applications. With that said, one of the things that Ford has led the way in, which has been really amazing, is some of the surprising things that people don't really know about Kubernetes, which have been happening with customers like Ford for a while now. One of them, for example, is the use of Kubernetes for on-prem applications. Very few people realize this; they think of Kubernetes as something that was born in the cloud and therefore has only really mushroomed in the cloud. But most of our customers are actually on-prem, and to me Kubernetes is transforming the data center. The agility that Satish speaks about is something you don't just need because you're operating in the cloud; you need it for all of your on-prem applications too. And that's been one of the unique characteristics that we've seen from Ford. Yeah, and you talked about your journey, Satish. The Pivotal folks really talked a lot about transformation and agility, no matter where your apps were sitting. I'm kind of curious about the storage and the statefulness of the applications you're working with now. If I looked at a diagram, what kind of a setup would there be? So there's a Portworx layer underneath and beside Kubernetes that's managing some of the storage and some of the replication. Is the data then sitting on a SAN somewhere? Is it sitting in the cloud?
I mean, can you kind of describe what a typical application would look like? A typical application, yes. We've been drawing storage for several years from NetApp as the primary source of our data. And on top of that, we run some kind of storage overlay. We dabbled with quite a few technologies, including Rook, NetApp Trident, and Gluster; it was a journey, a journey that ultimately led us to Portworx, and we're just getting started with Portworx. But the first aspect has been the gravity that the storage brings along with it. All the cloud-native stuff is great, but cloud-native state has to live somewhere, and it has to be managed someplace. And we said, hey, can we do that with Kubernetes, right? I won't say we've done an outstanding job, but at least we have done a reasonably good job of wrapping our heads around it. And we have quite a few workloads in production that are actually stateful, whether they're build systems, data and messaging systems, and similar applications. So that's something we've been working on for the past few years on our platforms, at least. Yeah. Murli, I wonder if you could expand a little bit on the application suite. What can we do? What can't we do? Listening to the keynote this morning, I definitely heard that if you look at a multicluster environment, you want to mirror and have the same things there. Well, I can't just magically have all the data everywhere; data has gravity and the laws of physics still apply. So I can't just automatically replicate terabytes from here to the cloud or back. Yeah. So help us understand where we are.
So one of the things Satish told me yesterday, which I love, is he said stateful is almost easier than stateless now, because of these extensions of Kubernetes. One of the things that's been very, very impactful is that Kubernetes now has extensions for managing storage, networking, and so on. And in fact, the way they do that is through an API, as an overlay; we are an example of an overlay. So think about it this way. About 60% of our customers are building a platform as a service. In many cases, they don't even know what applications are going to be in there. So across our customer base, we see the same alphabet soup over and over and over again. Guess what it is? Postgres, Cassandra, all the databases, Redis, right? All of the messaging queues, things like Kafka, and streaming data, for example Spark workloads. So one of the key things happening with customers, particularly large enterprises, is that they are using all kinds of applications and they're all stateful. I mean, there are very few enterprises that are not stateful, and they're all running on some kind of storage substrate that has virtualized the underlying storage. So we run on top of the underlying hardware, but then we're able to work with all of the orchestration that Kubernetes provides, and we are adding the orchestration of the data infrastructure as well as the storage itself. And I think that's one of the key things that's changed with Kubernetes in the last, I would say, two and a half years: most people used to think of it as in the cloud and stateless, but now it's on-prem and stateful. Satish, you know, one of the things we've talked to customers about is their journey of modernizing their applications.
There are things that you might build brand new that are great here, but, you know, I'm sure you have thousands of applications, and going from the old way to a brand new thing, there are lots of different ways to get there. Where are you with the journey of getting things onto this platform layer that we're talking about, and what will that journey look like for Ford? Net-new apps, anything brand new, we're talking about writing as cloud-native, 12-factor apps. But for anything existing, data services, messaging services, what we affectionately call table-stakes services, which all the 12-factor apps rely on, we're targeting Kubernetes. The idea is, are we there yet? Probably not, but we're getting there, along with our partners, to put it on platforms like Kubernetes, right? We are also doing a lot of automation and orchestration on VMs themselves, but the idea is that heavier and heavier workloads are going to be landing on Kubernetes platforms, and there will be a lot of work in the upcoming years, particularly 2020, where we will be concentrating more on those things. The continuing net-new growth will be 12-factor; it could be in Cloud Foundry, could be in Kubernetes, time will tell, but that's the guiding philosophy, so to speak. There's a lot that we have to learn in this journey right now. I was kind of curious about that, Satish. I mean, we talked about an alphabet soup, you talked about a lot of different projects, and certainly here at KubeCon, the thing about the Cloud Native Computing Foundation is not that they don't have opinions, but everybody has an opinion. There are lots of different components here. It's not one stack. It's a collection of things that can be put together in several different ways. So you've tried a bunch of different things with storage.
I'm actually interested in whether there were surprises. Containerized I/O activity is probably different from storage I/O in a virtual machine, and the storage itself has some different assumptions built into it. So do you have any advice for people? I'm interested in the storage case, but you're also going to have to evaluate networking and security and compliance and a lot of different things. How do you go about approaching this sort of evaluation, this trial, this journey, when you are facing an alphabet soup of options? I think it all comes down to basic engineering, right? Think about what your failure points are: it could be servers failing, infrastructure, hardware failing, right? The basic tenet is that we try to introduce failure as early as possible. What happens if you pull the wire? What happens if the server fails? The question always comes back: is there a way I can compose the same infrastructure so that I can spread it across a couple of failure domains? That was the whole idea when we started: can we decompose the problem such that we can take advantage of primitives that are baked into Kubernetes? The great thing is CSI, which we are just now getting the benefit of; before that, it was all FlexVolume drivers. But how do you organize storage in the backend in a way that allows you to compose things on the front end using the Kubernetes primitives? That was the approach we took. And CSI is a standard API. Correct. Yeah, a storage API, yeah. Exactly, that's what we are relying on; we are hoping that it is going to help us with things like moving compute to the storage rather than moving storage to the compute. So that's one of the evolving ideas we are working on with Portworx; we've been working with the community folks from Rook and a couple of other areas. There's a lot to be done here.
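The decomposition Satish describes, spreading storage replicas across distinct failure domains so that pulling one wire or losing one server doesn't take data offline, can be sketched in a few lines. This is an illustrative assumption, not Portworx's or Ford's actual placement logic; the node and rack names are hypothetical:

```python
# Hypothetical sketch: place volume replicas so no two land in the
# same failure domain (e.g. a rack or availability zone), in the
# spirit of Kubernetes topology labels.
from collections import defaultdict

def place_replicas(nodes, replica_count):
    """nodes: list of (node_name, failure_domain) tuples.
    Returns one node name per replica, each in a distinct
    failure domain. Raises if there are too few domains."""
    by_domain = defaultdict(list)
    for name, domain in nodes:
        by_domain[domain].append(name)
    if len(by_domain) < replica_count:
        raise ValueError("not enough failure domains for requested replicas")
    # Pick one node from each of the first replica_count domains.
    return [candidates[0] for _, candidates in
            sorted(by_domain.items())[:replica_count]]

nodes = [
    ("node-a", "rack-1"), ("node-b", "rack-1"),
    ("node-c", "rack-2"), ("node-d", "rack-3"),
]
print(place_replicas(nodes, 3))  # one replica per rack
```

A real scheduler would also weigh capacity and load per node, but the failure-domain constraint is the part Satish's "pull the wire" test exercises.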
We are still in very early days, I would say. All right, Murli, I want to make sure we get this out there: Portworx had some updates for this week, so why don't you share the latest? The updates actually relate to exactly what Satish was talking about. Container storage has been on its own journey, right? In the early days, which John remembers well, it was really about providing persistent storage, making that data available everywhere. It then clearly moved to HA, having high availability, say, within the cluster and so on. But the data lifecycle for a containerized application extends well beyond that. So we are making extensions to our own product that follow that path. One of the things we launched a few months ago was disaster recovery, DR, which is very, very specific to containers: container-granular DR, so you can take a snapshot not just of the data, but of the application state as well as the Kubernetes pod spec, and recover all three of them. At this KubeCon, we're announcing two other things. One of them is backup. As our customers make the journey through their app lifecycle, inevitably they need to back up their data, and we have, again, container-granular backup that we're providing, all, by the way, on existing storage. We're not asking anybody to change their hardware storage substrate. The last thing we're introducing is storage capacity management, which is fully automated. You know, one of the characteristics of Kubernetes is to get the person, the trouble ticket, out of the picture, right? The world is going to be automated, and Kubernetes is one of the ways people are doing that. What we have provided is the ability to auto-resize volumes, auto-resize pools of storage, and add more nodes automatically, based on policy, completely automated.
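The policy-driven auto-resize Murli describes boils down to a simple control loop: when a volume's usage crosses a threshold, grow it by a factor, up to a cap. The sketch below is a minimal illustration of that idea; the function name, thresholds, and numbers are assumptions for the example, not the Portworx Autopilot API:

```python
# Hypothetical sketch of a capacity-management policy: grow a volume
# automatically when usage crosses a threshold, capped at a maximum.
def next_volume_size(current_gb, used_gb, threshold=0.8,
                     growth_factor=1.5, max_gb=1024):
    """Return the new volume size in GB, or the current size if
    usage is still below the threshold. Growth never exceeds max_gb."""
    if used_gb / current_gb < threshold:
        return current_gb          # under threshold: no action
    proposed = current_gb * growth_factor
    return min(proposed, max_gb)   # respect the policy cap

print(next_volume_size(100, 50))   # under threshold, unchanged
print(next_volume_size(100, 85))   # over threshold, grown 1.5x
print(next_volume_size(800, 700))  # growth capped at max_gb
```

In practice such a loop would run against live usage metrics and issue a volume-expand call rather than return a number, but the decision logic is the "policy" part that removes the trouble ticket from the picture.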
So again, one of the characteristics of containerized workloads is that they're unpredictable. They go up and down, and they grow very fast sometimes. So all of that management, autopilot, backup, and DR, has now been added in addition to persistent storage and HA. All right, so before I let you both go, I want to talk about 2020. Satish, I want to give you a wish. You talked about all the things you'll do over the next couple of years. If you could get one thing more out of this ecosystem to make life easier for you and your team, what would that be? I think standardization of more of these interfaces. Kubernetes provides a great platform for everybody to interact equally. More things like the CSI and CRI work that's happening in the community; more standardization will actually make my life, and enterprises' lives, a lot easier. We'd like to see that continue happening. The GPU space also looks very interesting. So we'll see; that would be my wish, at least. All right, so Murli, I'm not giving you a wish. You're going to tell me: what should we be looking for from Portworx and its participation in this community over the next year? I think one of the big changes that's happened, really in the last couple of years, but it's really achieving a hockey stick now, is that enterprises are recognizing that stateful apps really should be using Kubernetes, and can use Kubernetes. So what I predict is that Kubernetes is going to move from just managing applications to actually managing infrastructure, like storage. My belief is that 2020 is the beginning of Kubernetes becoming the control plane across the data center and cloud. It's the new control plane, what OpenStack was aspiring to be many years ago, and it will be looking upwards to manage applications and downwards to manage infrastructure. And it's not just us who are saying that.
Folks like VMware, with Project Pacific, have clearly indicated that that's the direction they see too. So I think its role is now much more than just an app orchestrator. It's really going to be the new control plane for infrastructure and apps in the enterprise and in the cloud. Murli, Satish, thank you so much for sharing all the updates. Pleasure to catch up with both of you. Thanks. Northbound, southbound, multi-cloud: theCUBE is at all of these environments and all the shows. For John Troyer, I'm Stu Miniman, as always. Thank you for watching theCUBE.