Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome back. This is theCUBE's coverage of KubeCon + CloudNativeCon 2019 in San Diego. I'm Stu Miniman with my co-host for the week, John Troyer, and happy to welcome to the program Tom Phelan, who's an HPE Fellow and was the Blue Data CTO, now part of Hewlett Packard Enterprise. Tom, thanks so much for joining us.

Thanks, Stu.

All right, so we talked with a couple of your colleagues earlier this morning about the HPE Container Platform. We're going to dig into it a little bit deeper here. So set the table for us as to the problem statement that HPE is looking to solve here.

Sure. Blue Data, whose technology we're talking about, addressed the issues of how to run applications well in containers in the enterprise. What this involves is: how do you handle security? How do you handle day-two operations like upgrading the software? How do you bring CI/CD practices to all your applications? That's what the HPE Container Platform is all about. So the announcement that went out this morning is that HPE is announcing the general availability of the HPE Container Platform, an enterprise solution that will run not only cloud-native applications, typically called microservices applications, but also legacy applications on Kubernetes. And it's supported in a hybrid environment: not only the main public cloud providers, but also on premises. And, in a bit of a departure for HPE, HPE is selling and licensing this product to work on heterogeneous hardware, so not only HPE hardware, but competitors' hardware as well.

That's good. One of the things I've been hearing over the last year is that when we talked about Kubernetes, it resonated with me for the most part; I'm an infrastructure guy by background.
When I talk in the cloud environment, it's really more about the applications. And we know why infrastructure exists: infrastructure exists to run my applications. It's about my data, it's about my business processes. And it seems like that is really where you're attacking with this solution.

Sure. This solution is a necessary portion of the automated infrastructure for providing solutions as a service. Historically, Blue Data specialized in artificial intelligence, machine learning, deep learning, and big data; that's where our strong suit came from. So we developed a platform that would containerize applications like TensorFlow, Hadoop, Spark, and the like, make it easy for data scientists to stand up clusters, and then do the horizontal scalability, separating compute and storage so that you can scale your compute independent of your storage capacity. What we're now doing as part of the HPE Container Platform is taking that same knowledge and expanding it to other applications beyond AI, ML, and DL.

And so what are some of those day-two implications then? What are some of the problems folks run into that you think the HPE Container Platform will eliminate?

It's a great question. Even though we're talking about applications that are inherently scalable (AI, ML, and DL applications are developed so they can be horizontally scalable), they're not stateless in the true sense of the word. When we say a stateless application, that means there is no state in the container itself that matters. If you destroy the container and re-instantiate it, there's no loss of continuity. That's a true stateless, or cloud-native, application. AI, ML, and DL applications tend to have configuration information and state information stored in what's known as the root storage of the compute node, what's in slash. So you might see per-node configuration information in a configuration file in the /etc directory.
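To make the statefulness point concrete: a vanilla Kubernetes container's writable root filesystem is ephemeral, so per-node configuration written under /etc disappears when the container is re-instantiated. One conventional, partial workaround is to mount a PersistentVolumeClaim over the configuration path. This is only an illustrative sketch of the general problem, not how the HPE Container Platform implements its solution; all names here are hypothetical.

```yaml
# Illustrative only: persisting one config directory via a PVC mount.
# KubeDirector generalizes this by preserving the whole root storage,
# so a restart looks like a reboot rather than a reinstall.
apiVersion: v1
kind: Pod
metadata:
  name: hadoop-worker-0                      # hypothetical pod name
spec:
  containers:
  - name: worker
    image: example.com/hadoop-worker:latest  # hypothetical image
    volumeMounts:
    - name: hadoop-conf
      mountPath: /etc/hadoop                 # this path survives restarts
  volumes:
  - name: hadoop-conf
    persistentVolumeClaim:
      claimName: hadoop-conf-pvc             # hypothetical pre-created claim
```

Anything the application writes elsewhere on the root filesystem would still be lost, which is exactly the gap Tom describes.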
Today, if you take standard off-the-shelf Kubernetes and you deploy Hadoop, for example, or TensorFlow, and you configure it, you lose that state when the container goes down. With the HPE Container Platform, we have been driving an open source project known as KubeDirector. A portion of KubeDirector's functionality is to preserve that root storage, so that if a container goes down, we are able to bring up another instance of that container with the same root storage. It'll look like just a reboot of the node rather than a reinstall of that node. That's a huge value when you're talking about these machine learning and deep learning applications that have this state in root.

All right, so Tom, how does KubeDirector fit compared to, or sit alongside, something like Rook, which was talked about in the keynote as being able to really provide that kind of universal backplane across all of my clusters? Is this specific to AI and ML, or is it broader?

It's a great question. KubeDirector itself is a Kubernetes operator, okay? We've implemented it, and the open source community is joining in. What it allows is this: KubeDirector is application agnostic. You author a YAML file with some pertinent information about the application that you want to deploy on Kubernetes. You give that YAML file to the KubeDirector operator, and it will deploy the application on your Kubernetes cluster and then manage the day-two activity. So this goes beyond Helm or Kubeflow, which are deployment engines. This also handles: what happens if I lose my container? How do I bring the services back up, when those services are dependent upon the type of application that's there? That's what KubeDirector does.
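The application-agnostic YAML definition Tom describes might look roughly like the following. The group/version, kind, and field names follow my recollection of the open-source bluek8s/kubedirector examples and may not match the current CRD schema; treat every identifier below as an assumption.

```yaml
# A rough sketch of a KubeDirector application definition, not an
# authoritative example of the real CRD schema.
apiVersion: kubedirector.hpe.com/v1beta1
kind: KubeDirectorApp
metadata:
  name: spark-example                         # hypothetical app name
spec:
  label:
    name: Spark (illustrative)
  distroID: example.spark                     # hypothetical distro identifier
  version: "1.0"
  defaultImageRepoTag: example.com/spark:2.4  # hypothetical image
  defaultPersistDirs:      # root-storage paths to preserve across restarts
  - /etc
  - /usr/local/spark
  roles:
  - id: controller
    cardinality: "1"       # exactly one controller
  - id: worker
    cardinality: "0+"      # workers scale horizontally
  services:
  - id: spark-ui
    endpoint:
      port: 8080
      urlScheme: http
  config:
    selectedRoles:
    - controller
    - worker
    roleServices:
    - roleID: controller
      serviceIDs:
      - spark-ui
```

A user would then request a running cluster with a companion KubeDirectorCluster resource referencing this app by name, and the operator handles deployment and day-two management from there.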
So KubeDirector allows a new application to be deployed and managed on Kubernetes without having to write an operator in Go, which makes it much easier to bring a new application to the platform.

That's right.

So Tom, kind of a two-part question. First part: you are one of the co-founders of Blue Data, now with HPE.

Yes.

Sometimes I think with technologies, some of them are invented in a lab or in a grad student's head. Others come out of real-world experience.

Yes.

And you're smiling, because I think Blue Data was really built around, at least, your experience building these new data apps.

This is 100% real-world experience. We were one of the real early pioneers of bringing these applications into containers. Truth be told, when Blue Data first started, we were using VMs. We were using OpenStack and VMware. And we realized that we didn't need to pay that overhead; it was possible to get the same thing out of a container. So we did that, and we suffered all the slings and arrows of how to make the security of the container meet enterprise-class standards. How do we automatically integrate with Active Directory and LDAP and Kerberos, with single sign-on? All those things that enterprises require for their infrastructure. We learned that the hard way, working with international banking organizations, financial institutions, investment houses, and medical companies. All our customers were those high-demand enterprises. Now that we're a part of HPE, we're taking all that knowledge we acquired, bringing it to Kubernetes, and exposing it through KubeDirector where we can. And there will be follow-on open source projects releasing more of that technology to the open source community.

That was actually part two of my question: okay, what about now, with HPE, the apps that are not AI/ML? And you nailed it, right? All those enterprise requirements. The same problem exists, right? There is secure data.
You have secure data in a public cloud, and you have it on premises. How do you handle data gravity issues so that you run your compute close to your data where it's necessary? You don't want to pay for moving data across the web like that.

All right, so Tom, platforms are used for lots of different things. Bring us inside: what are you hearing from your early customers? What are some of the key use cases that should be highlighted?

Our key use cases were those customers who had internal developers, so they had a lot of expertise in-house. Maybe they had medical data scientists or financial analysts. They wanted to build up sandboxes. So we helped them stand up cookie-cutter sandboxes within a few moments. They could go ahead and play around with them, and if they screwed them up, so what, right? We tear them down and redo it within moments. They didn't need a lot of heavyweight DevOps lifting to reinstall bare-metal servers with these complex stacks of applications. The data scientist said, I want to use this software that just came out of the open source community last week, deployed in a container, and I want to mess with it. I want to really push the edge on this. And so we did that; we developed this sandboxing platform. Then they said, okay, now that you've tested this, I have it in QA, I've done my CI/CD, I've done my testing. Now I want to promote it into production. So we did that. We allowed the customer to define different qualities of service depending on what tier their application was running in. If it was in test and dev, it got the lowest tier. If it was in CI/CD, it got a higher level of resource priority. Once it was promoted to production, it got guaranteed resource priority, the highest level, so that you could always make sure that the customer using the production cluster got the highest level of access to the resources. So we built that out as a solution.
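The tiered resource-priority model Tom describes (test/dev lowest, CI/CD higher, production guaranteed) can be expressed in plain Kubernetes with scheduling priority classes. This is only an illustrative sketch of the concept; the transcript doesn't say HPE implements it this way, and the class names and values are hypothetical.

```yaml
# Illustrative only: three scheduling tiers. Pods opt in with
# spec.priorityClassName; higher-value pods can preempt lower ones.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tier-production
value: 1000000            # highest: promoted production workloads
globalDefault: false
description: "Guaranteed priority for production clusters"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tier-cicd
value: 10000              # mid tier: CI/CD pipelines
globalDefault: false
description: "Elevated priority for CI/CD workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: tier-dev
value: 100                # lowest tier: test/dev sandboxes
globalDefault: false
description: "Best-effort priority for sandboxes"
```

Promoting an application from sandbox to production then amounts to redeploying it with a higher `priorityClassName`, possibly alongside per-namespace ResourceQuotas.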
KubeDirector now allows us to deploy that same sort of thing with the Kubernetes container orchestrator.

Tom, you mentioned bare metal, and we've talked about VMs. We've been hearing a lot of multi-cloud stories here already today, the first day of KubeCon. It seems like that's a reality out in the world. Can you talk about where people are putting applications, and why?

Well, clearly best practice today is to deploy virtual machines and then put containers in the virtual machines. And people do that for two very legitimate reasons. One is that there's concern about the security boundary of containers: if you had a rogue actor, they could break out of the container, and if they're confined within a virtual machine, you can limit the impact of the damage. That's one very good reason for virtual machines. Also, there's a feeling that it's necessary to maintain the container state running in a virtual machine while still being able to upgrade the PROM code or the host software itself. So you want to be able to vMotion a virtual machine from one physical host to another and maintain the state of the containers. What KubeDirector brings, and what Blue Data and HPE are stating, is that we believe we can provide both of those capabilities with containers on bare metal, okay? We've spoken a bit today already about how KubeDirector allows the root file system to be preserved. That is a huge component of why vMotion is used to move a container from one host to another, and we believe we can do that with a reboot. Also, the HPE Container Platform runs all containers at reduced privilege. We're not giving root or privileged status to those containers. So we minimize the attack surface of the software running in the container by running it as an unprivileged user, with tight control of the container capabilities that are configured for a given container.
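The unprivileged, capability-restricted posture Tom describes maps directly onto a standard Kubernetes securityContext. Again, this is an illustrative sketch rather than HPE's actual configuration; the pod name, image, and granted capability are hypothetical.

```yaml
# Illustrative only: run as a non-root user, forbid privilege
# escalation, drop all capabilities, then add back only what the
# application genuinely needs.
apiVersion: v1
kind: Pod
metadata:
  name: locked-down-app            # hypothetical pod name
spec:
  containers:
  - name: app
    image: example.com/app:1.0     # hypothetical image
    securityContext:
      runAsNonRoot: true           # refuse to start as UID 0
      runAsUser: 1000
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]              # start from zero capabilities
        add: ["NET_BIND_SERVICE"]  # e.g. only if it must bind port < 1024
```

This is the "just enough functionality" idea: the attack surface shrinks because the kernel simply refuses anything outside the small granted set.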
We believe that just enough privilege, just enough functionality, is granted to that container to run the application and nothing more. So we believe we're limiting the attack surface, and that's why we believe we can validly state that we can run these containers on bare metal without the enterprise having to compromise on security or persistence of the data.

All right, so Tom, the announcement this week: is the HPE Container Platform available today?

We are announcing it now in limited availability to select customers. It will be generally available in Q1 of 2020.

All right. And give us a look ahead: we come back to KubeCon, which will actually be in Boston next year in November. When we're sitting down with you and you say it's been hugely successful, give us some of the KPIs your team is looking at.

We're going to look at how many new customers (these are not the historic Blue Data customers) we have convinced that they can run their production workloads on Kubernetes. And I don't care how many POCs we do or how many test-and-dev things; I want to know about production workloads, the bread and butter for these enterprises, that HPE is helping run in the industry. And that will be not only, as we've talked about, cloud-native applications, but also the legacy J2EE applications they're running today, on Kubernetes.

Yeah, I don't know if you caught the keynote this morning, but Dan Kohn, who runs the CNCF, was talking about a lot of the enterprises and equating them with second graders: we need to get over the fact that things are going to break, and we're worried about making changes. The software world we've been talking about for a number of years is absolutely one where things will break, but software needs to be resilient as a distributed system. So what advice do you give the enterprises out there to be able to dive in and participate?

It's a great question; we get it all the time.
The first thing is to identify your most critical use case that we can help you with, and don't try to boil the ocean. Let's get the Container Platform in there; we will show you how to have success with that one application. Once that's done, you'll build up confidence in the platform, and then we can run the rest of your applications in production.

All right, well, Tom Phelan, thanks so much for the updates. Congratulations on the launch of the HPE Container Platform, and we look forward to seeing the results.

I hope you invite me back; this was really fun, and I'm glad to speak with you today. Thank you.

All right, for John Troyer, I'm Stu Miniman. Still lots more to go here at KubeCon + CloudNativeCon 2019. Thanks for watching theCUBE.