Welcome back to theCUBE's coverage of HPE's GreenLake announcements. My name is Dave Vellante, and you're watching theCUBE. I'm here with Holger Mueller, who is an analyst at Constellation Research, and Matt Maccaux, the Global Field CTO of Ezmeral Software at HPE. We're going to talk data. Gents, great to see you.

Great to be here.

So Holger, what do you see happening in the data market? Obviously data's hot. Digital, I call it the forced march to digital. Everybody realizes, wow, digital business, that's a data business. We've got to get our data act together. What do you see in the market as the big trends, the big waves?

We're all young and old enough to remember when people were saying data is the new oil, right? Nothing has changed. Data is the key ingredient that matters to enterprises, which they have to store, which they have to enrich, which they have to use for their decision making. It's the foundation of everything. If you want to go into machine learning and AI, which are growing very fast, we now have the capability to look at all the data in the enterprise, which we weren't able to do 10 years ago. So data is central to everything.

Yeah, it's even more valuable than oil, I think, right? Because oil you can only use once. Data is kind of polyglot. I can go in different directions with it, and it's amazing, right? It's the beauty of digital products. They don't get consumed. They don't get burned up. And no carbon footprint.

No, wait, wait, we have to think about carbon footprint, different story, right? To get to the data, you have to spend some energy. It's that simple.

Right, I mean, it really is, data is fundamental. It's got to be at the core. And so Matt, what are you guys announcing today, and how does that play into what Holger just said?

What we're announcing today is that organizations no longer need to make a difficult choice.
Prior to today, organizations were thinking, if I'm going to do advanced machine learning and really exploit my data, I have to go to the cloud. But all my data is still on premises because of privacy rules, industry rules. And so what we're announcing today through GreenLake Services is a cloud-services way to deliver that same cloud-based analytical capability, machine learning, and data engineering through hybrid analytics. It's a unified platform to tie together everything from data engineering to advanced data science. And we're also announcing the world's first Kubernetes-native object store that is hybrid cloud enabled, which means you can keep your data connected across clouds in a data fabric or, Dave, as you say, mesh.

Okay, can we dig into that a little bit? So you're essentially saying that you're going to have data in both places, right? Public cloud, edge, on-prem. And you're saying HPE is announcing a capability to connect them. I think you used the term fabric. I'm cool, by the way, with the term fabric. We'll parse that out another time.

I love the terminology debates around fabrics, right? For me, every fabric breaks down into a mesh if you put it under a microscope.

Oh, well, now that's too detailed for my brain at this moment. But you're saying you can connect all those different estates, because data by its very nature is everywhere, and you're going to unify that. And you can manage that through sort of a single view?

That's right, so the management is centralized. We need to be able to know where our data is being provisioned. But again, we don't want organizations to feel like they have to make a trade-off if they want to use Cloud Service A in Azure and Cloud Service B in GCP. Why not connect them together? Why not allow the data to remain in sync, or not, throughout a distributed fabric? Because we use that term fabric over and over again. But the idea is, let the data be where it most naturally makes sense and exploit it.
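The "in sync or not" idea can be sketched simply: one way a fabric might decide whether a given object is in sync across locations is by comparing content hashes of the copies each site holds. This is an illustrative sketch of the concept, not HPE Ezmeral's actual implementation; the bucket contents and key names are invented for the example.

```python
# Compare content digests of objects held at two locations to flag
# which keys are out of sync. Purely illustrative of the concept.
import hashlib

def digest(data: bytes) -> str:
    """Content fingerprint used to compare copies across sites."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical object stores: key -> raw bytes.
onprem = {"orders/1.json": b'{"id": 1}', "orders/2.json": b'{"id": 2}'}
cloud  = {"orders/1.json": b'{"id": 1}', "orders/2.json": b'{"id": 2, "v": 2}'}

# A key is out of sync if it is missing remotely or its content differs.
out_of_sync = [
    key for key in onprem
    if key not in cloud or digest(onprem[key]) != digest(cloud[key])
]
print(out_of_sync)  # the edge copy of orders/2.json has drifted
```

A real fabric would track lineage and versions rather than rehash full objects, but the decision it makes per key is the same shape.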
Monetization is an overused term, but exploit it in a way that works best for your users and applications.

In sync or not, that's interesting. So it's my choice.

That's right, because the back of an automobile could be a teeny, tiny, small edge location. It's not always going to be in sync until it connects back up with a training facility. But we still need to be able to manage that. And maybe that data gets persisted to a core data center, or maybe it gets pushed to the cloud. But we still need to know where that data is, where it came from, its lineage, what quality it has, what security we're going to wrap around it. That all should be part of this fabric.

Okay, so you've got essentially a governance model. Maybe you're working toward that, maybe it's not all baked today, but that's the North Star. This fabric connects a single management view, governed in a federated fashion?

Right, and it's available through the most common APIs that these applications are already written to. So everybody today is talking S3. I've got to get all of my data, I need to put it into an object store, and it needs to be S3 compatible. So we are extending this capability to be S3 native, but optimized for performance. Today, when you put data in an object store, it's kind of one size fits all. Well, we know that for those streaming analytical capabilities, those high-performance workloads, it needs to be tuned for that. So how about I give you a very small object on the very fastest disk in your data center, and maybe that cheaper location somewhere else. And so we're giving you that balance as part of the overall management estate.

Holger, what's your take on this? I mean, Frank Slootman says, we're not doing a halfway house. We're never going on-prem, only in the cloud. So that basically says, okay, he's ignoring a pretty large market by choice. You're not, Matt. You must love those words.
But what do you see as the public cloud players' moves on-prem, particularly in this realm?

Well, we've seen lots of cloud players who were cloud-only coming back toward on-premises, right? We call it the next-generation compute platform, where I can move data and workloads between on-premises and, ideally, multiple clouds, because I don't want to be locked in to public cloud vendors. And we see two trends. One trend is that the traditional hardware suppliers of on-premises haven't scaled to cloud technology in terms of big data analytics. They just missed the boat on that in the past. This is changing. You guys are a traditional player and are changing this, so congratulations. The other thing is there's been no innovation in the on-premises tech stack, right? For a long time, the only technology stack for running modern applications was being invested in in the cloud. So what we've seen over the last two, three years, the first being Google with Kubernetes, bringing GKE on-premises and then Anthos, is that tech stack coming, with compromises, to on-premises. That acknowledges exactly what we're talking about: data is everywhere, data is important, data gravity is there. Partly that's the networks' fault, because the networks are too slow. If we could just move everything anywhere we wanted, we'd be in a different place. But there hasn't been enough investment from the traditional IT players in that stack, and with the modern stack now there, every public cloud player has an on-premises offering with different flavors and different capabilities.

I want to give you guys Dave's version of the history, and you can, of course, correct it and tell me, Matt, how this maybe fits into what's happened with the customer.
So, you know, before Hadoop, obviously you had to buy a big Oracle database running on Unix, and you'd buy some big storage subsystem, and if you had any money left over, you'd maybe do some actual analytics. But then Hadoop comes in and lowers the cost, and then S3 kneecaps the entire Hadoop market, right?

I wouldn't say that, I wouldn't agree.

Well, it's your history, right?

Because the fascinating thing Hadoop brought to the enterprise for the first time, and you're absolutely right, is that it was affordable. But it's not only about affordability, because S3 has the affordability. The big thing is you can store information without knowing how you're going to analyze it, right? No schema on write. Before, with an Oracle database, it was a star schema for the data warehouse and so on. You had to make decisions about how to store that data, because compute capabilities and storage capabilities were too limited. That's what Hadoop blew away.

I agree, no schema on write, but then that created data lakes, which created data swamps and that whole mess, and then Spark comes in and helps clean that up. Okay, fine, so we're cool with that. But in the early days of Hadoop, you had companies with a Hadoop monolith, and they probably had their data catalog in Excel or Google Sheets, right? And so now my question to you, Matt: there are a lot of customers that are still in that world. What do they do? They've got an option to go to the cloud. I'm hearing that you're giving them another option.

That's right. So, we know that data is going to move to the cloud, as I mentioned. So let's keep that data in sync, governed, and secured like you expect. But for the data that can't move, let's bring those cloud-native services to your data center. And so that's a big part of this.
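The schema-on-write versus schema-on-read distinction Holger draws can be shown in a few lines. Schema-on-write (the warehouse model) rejects records that don't fit a predeclared schema at ingest; schema-on-read (the Hadoop and data-lake model) stores raw records as-is and applies structure only at query time. The records and field names here are invented for the example.

```python
# Minimal illustration of schema-on-write vs. schema-on-read.
import json

raw_events = [
    '{"user": "alice", "amount": 42.0}',
    '{"user": "bob", "amount": 13.5, "coupon": "SPRING"}',  # extra field
]

# Schema-on-write: validate against a fixed schema before storing.
SCHEMA = {"user", "amount"}

def write_with_schema(record: str) -> dict:
    parsed = json.loads(record)
    if set(parsed) != SCHEMA:
        raise ValueError(f"schema mismatch: {sorted(parsed)}")
    return parsed

# Schema-on-read: ingest everything with no validation, project at query time.
lake = [json.loads(r) for r in raw_events]   # raw records land untouched
amounts = [e["amount"] for e in lake]        # schema applied only when read
print(sum(amounts))
```

The second record would be rejected by the warehouse path but lands fine in the lake, which is exactly the flexibility (and, as Dave notes, the swamp risk) Hadoop introduced.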
The announcement is this unified analytics, so that you can continue to run the tools that you want to today while bringing in those next-generation tools based on Apache Spark, using libraries like Delta Lake. So you can go from Tableau, through Presto SQL, to advanced machine learning in your Jupyter notebooks, on-premises, where you know your data is secured. And if it happens to sit in an existing Hadoop data lake, that's fine too. We don't want our customers to have to make that trade-off as they go from one to the other. Let's give you the best of both worlds, or as they say, you can eat your cake and have it too.

Okay, so now let's talk about developers on-prem. Right, if they really wanted to go cloud native, they had to go to the cloud. Do you feel like this changes the game? Do on-prem developers want that capability? Will they lean into that capability? Or will they say, no, no, the cloud is cool? What's your take?

I love developers, right? But it's about who makes the decisions and who pays the developers. So the CXOs and the enterprises, they need exactly this, which is why we call it the next-gen computing platform: you can move the code assets. It's very hard to build software, so it's very valuable to an enterprise. I don't want it limited to one single location or a certain computing infrastructure. Luckily, we have Kubernetes to be able to move that, but I want to be able to deploy it on-premises if I have to, and I want to be able to deploy it in the multiple clouds which are available. And that's the key part. And that makes developers happy too, because the code you write is going to run in multiple places. So you can build more code, better code, instead of building the same thing in multiple places because a compiler changed a little here and a little there. Nobody wants to do portability testing and rewriting, recertifying for certain platforms.
So the head of application development or application architecture and the business are ultimately going to dictate that. Number one. Number two, you're saying the developers shouldn't care, because they can write once, run anywhere.

That is the promise. And that's the interesting thing which is available now, thanks to Kubernetes as a container platform and the abstraction which containers provide. And that makes everybody's life easier, but it goes much higher than the head of apps, right? This is the digital transformation strategy, the next-generation applications a company has to build as a response to the pandemic, as a pivot, as a digital transformation, as a digital disruption capability.

I mean, I see a lot of organizations basically modernizing by building some kind of abstraction to their back-end systems, modernizing through cloud native, and then saying, hey, as you were saying, okay, run it anywhere you want, or connect to those cloud apps, or connect across clouds, connect to other on-prem apps, and eventually out to the edge. Is that what you see?

It's so much easier said than done, though. Organizations have struggled so much with this, especially as we start talking about those data-intensive apps and workloads on Kubernetes and Hadoop; up until now, organizations haven't been able to deploy those services. So what we're offering as part of these GreenLake unified analytics services is a Kubernetes runtime. It's not ours. It's top-of-branch open source, with open-source operators like Apache Spark and Delta Lake libraries, so that if your developer does want to use cloud-native tools to build those next-generation advanced analytics applications, but prod is still on-premises, they should just be able to pick that code up, and because we are deploying 100% open-source frameworks, the code should run as is.

So it seems like the strategy is basically to build out what GreenLake is, right? It's a cloud. Here are your options.
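The "pick the code up and it runs as is" point usually comes down to keeping deployment specifics out of the code: the application stays identical, and only environment-supplied configuration selects the on-prem or cloud target. A small sketch of that convention follows; the `DATA_ENDPOINT` variable and the endpoint URLs are assumptions for illustration, not an HPE or Kubernetes API.

```python
# Same application code, different deployment target: the endpoint is
# resolved from the environment (e.g. injected by a Kubernetes
# ConfigMap), never hardcoded into the analytics code itself.
import os

def storage_endpoint() -> str:
    """Resolve the object-store endpoint from the deployment environment."""
    # Default simulates an on-prem S3-compatible store (hypothetical URL).
    return os.environ.get("DATA_ENDPOINT", "https://s3.onprem.example.local")

print(storage_endpoint())  # on-prem default when nothing is injected

# A cloud deployment would inject a different value; the code is unchanged.
os.environ["DATA_ENDPOINT"] = "https://s3.us-east-1.amazonaws.com"
print(storage_endpoint())
```

Because both Spark and the S3 API are open standards on either side, this configuration seam is all that has to change between environments.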
Use whatever you want.

Well, and it's your cloud. That's what's so important about GreenLake: it's your cloud, in your data center or colo, with your data, your tools, and your code. And again, we know that organizations are going to go to a multi- or hybrid-cloud location, and through our management capabilities we can reach out. If you don't want us to control those, that's okay, but we should at least be able to monitor and audit the data that sits in those other locations and the applications that are running. Maybe I register your GKE cluster. I don't manage it, but at least through a central pane of glass I can tell the head of applications what that team's utilization is across these environments.

You know, you said something, Matt, that resonated with me, which is that this is not trivial. It's not simple to do. You see a lot of companies, vendors, what they're doing is they'll wrap their stack in Kubernetes and shove it in the cloud. It's essentially a hosted stack, right? And you're taking a different approach. You're saying, hey, we're essentially building a cloud that's going to connect to all these estates. And the key is you're going to have to keep innovating, and you are. I think that's part of the reason why we're here, announcing stuff very quickly.

A lot of innovation has to come out to satisfy the demand that you're essentially talking about, because we oversimplified things with containers, right? Containers by themselves don't have what matters for data, and what matters for the enterprise, which is persistence. I have to be able to turn my systems down, or I don't know when I'm going to use that data, but it has to stay there. And that's not solved in the container world by itself. And that's what's coming now through the heavy lifting done by people like HPE to provide that persistence of the data across the different deployment platforms. And then there's just the need to modernize my on-premises platform, right?
I can't run on a server which is two, three years old, right? It's no longer safe. It doesn't have trusted identity, all the good stuff that you need these days. It can't necessarily be operated remotely, or whatever happens there.

Well, two, three years is long enough for a server to have run its course. But you're a software guy. You hate hardware anyway.

Hardware isn't necessarily evil. Just abstract that hardware complexity away from me. It's like TSA: I want to go somewhere, but I have to go through TSA.

But that's a key point. Let me buy a service. If I need compute, give it to me. And if I don't, I don't want to hear about it, right? And that's the direction that you're headed, right? That's what you're offering.

That's right. And specifically the services. So GreenLake has been offering infrastructure, virtual machines, IaaS as a service. And we want to stop talking about that underlying capability, because it's a dial tone now. What organizations and these developers want is the service. Give me a service or a function like I get in the cloud, but I need to get going today. I need it within my security parameters, with access to my data and my tools, so I can get going as quickly as possible. And then beyond that, we're going to give you those cloud billing practices. Because just because you're deploying a cloud-native service, if it's still being paid for via CAPEX, you're not solving a lot of problems. So we also need to have that cloud billing model.

Great. Holger, we'll give you the last word. Bring us home.

It's very interesting to have the cloud qualities of subscription-based pricing, maintained by HPE as the cloud vendor, available somewhere else. And that gives you that flexibility. And that's very important, because data is essential to enterprise processes. And there are three reasons why data doesn't go to the cloud, right? We know that. It's privacy and residency requirements.
There's no cloud infrastructure in the country. It's performance, because network latency plays a role, especially for critical applications. And then there's not-invented-here, right? Remember Charles Phillips talking about whether all the CIOs are going to go to the cloud or not? So it's not invented here. These are the things which keep data on premises, and that's the workload HPE is going after. It's physics, it's laws, it's politics, and sometimes it's cost, right? It's sometimes too expensive to move and migrate.

Guys, thanks so much. Great to see you both.

Dave, it's always a pleasure.

All right, and thank you for watching theCUBE's continuous coverage of HPE's big GreenLake announcements. Keep it right there for more great content.