From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation.

Hi, I'm Peter Burris, analyst at Wikibon. Welcome to another Wikibon theCUBE digital community event. This one's sponsored by HPE and focused on hybrid storage. Like all of our digital community events, this one will feature about 25 minutes of video followed by a crowd chat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on the important issues facing business today. So what are we talking about today? Again, hybrid storage. Let's get going.

So what is hybrid storage? In a lot of shops, most people have associated the cloud with the public cloud. But as we gain experience with the challenges of transforming to digital business, in which we use data as a singular value-producing asset, IT professionals are increasingly recognizing an important relationship between data, storage, and cloud services. In many respects, what we're trying to master today is a better understanding of how the business is going to use data to effect significant changes in how it behaves in the marketplace. And it's that question of behavior, of action, of location that is pushing businesses to think differently about how their cloud architectures are going to work. We're going to keep data proximate to where it's created, to where it's going to be used, to where it's going to generate value. That demands storage resources in place close to that data, proximate to that value-producing activity, and the cloud services will have to follow. In many respects, that's what we're talking about when we talk about hybrid cloud today: the increasing recognition that we're going to move cloud services to the data as the default, and not move the data into the cloud, the public cloud specifically. As we gain experience with this powerful set of technologies, we're learning that data architecture is going to be increasingly distributed, that storage therefore will be increasingly distributed, and that cloud services will flow to where the data is required, utilizing the storage technologies that can best serve each set of workloads. So it's a more complex world, and one that demands new levels of simplicity, ease of use, and optimization. That's where we're going to start our conversation.

These crucial questions of how data, storage, and cloud are going to come together to create hybrid architectures were the basis for a great CUBE conversation between SiliconANGLE Wikibon's Dave Vellante and HPE's Sundip Arora. Let's hear what they had to say.

Let's talk about the breakdown of those three things: cost efficiency, ease of use, and resource optimization. Let's start with cost efficiency. So obviously there's TCO, but there's also the way in which I consume. People, I presume, are looking for a different pricing model. Are you hearing that? Yeah, absolutely. As part of the cost of running their business and being able to operate like a cloud, everybody's looking at a variety of different procurement and utilization models. One of the ways HPE provides a utilization model that can map to their cloud journey, a public cloud journey, is through GreenLake. The ability to consume data on demand and consume compute on demand, across the entire portfolio of products HPE has, is essentially what a GreenLake journey looks like.
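To make that consumption model concrete, here's a minimal sketch of what pay-per-use capacity metering might look like. The reserve floor, the rate, and the function are illustrative assumptions for the sake of this conversation, not GreenLake's actual billing terms.

```python
# Hypothetical sketch of consumption-based (pay-per-use) capacity billing.
# The reserve floor, unit rate, and daily measurements are illustrative
# assumptions, not GreenLake's actual billing model.

def monthly_charge(daily_usage_tb, reserve_tb=100.0, rate_per_tb=25.0):
    """Bill average measured usage, never dropping below a reserved floor."""
    average_used = sum(daily_usage_tb) / len(daily_usage_tb)
    billable_tb = max(average_used, reserve_tb)  # reserve sets the minimum bill
    return billable_tb * rate_per_tb

# Usage grows past the reserve mid-month, so the bill follows actual consumption.
usage = [90.0] * 10 + [130.0] * 20           # TB measured on each of 30 days
print(f"${monthly_charge(usage):,.2f}")      # -> $2,916.67
```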
Let's go into ease of use. What do you mean by that? I mean, people think cloud, they think swipe the credit card and start deploying machines. What do you mean by ease of use? For us, ease of use translates back to how you map to a simpler operating and support model. The support model is the key for customers to be able to realize the benefits of going to the cloud. To get to a simpler support model, we use AIOps. And for us, AIOps means using a product called InfoSight. InfoSight is a product that uses deep learning and machine learning algorithms to look at a wide net of call-home data from the physical resources out there, and then take that data and make it actionable. And the action behind it is predictive and prescriptive: creating automated support tickets and closing automated support tickets without anybody ever having to pick up a phone and call IT support. That InfoSight model is now being expanded across the board to all HPE products. It started with Nimble. Now InfoSight is available on 3PAR. It's available on Synergy. And a recent announcement said it's also available on ProLiant. We expect that InfoSight becomes the glue, the automation and AI glue, that goes across the entire portfolio of HPE products. So this is a great example of applying AI to data. It's like call home taken to a whole new level, isn't it? Yeah, it absolutely is. In fact, it uses the call-home data that we've had for a long time with products like 3PAR, which was amazing data, but it wasn't being acted on in an automated fashion. It takes that data and creates an automation task around it. And many times that automation task leads to a much simpler support experience.

All right, the third item you mentioned was resource optimization. Let's drill down into that. I infer from that there are performance implications, and maybe governance, compliance, physical placement. Can you elaborate? Add some color. Yeah, I think it's all of the above. It's definitely about applying the right performance level to the right set of applications. We call this application-aware storage. Being able to understand which application is creating the data allows us to understand how that data needs to be accessed, which in turn means we know where it needs to reside. One of the things HPE is doing in the storage domain is creating a common storage fabric with the cloud. We call that the fabric for the cloud. The idea is that we have a single layer between the on-premises and off-premises resources that allows us to move data as needed, depending on the application needs and the user needs.
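The AIOps loop Sundip describes — mine call-home telemetry, detect trouble, and open and close support cases with no human phone call — can be illustrated with a toy detector. This is a minimal sketch assuming a simple rolling z-score test; InfoSight's real models and interfaces are far richer, so everything here is a hypothetical stand-in.

```python
# Toy illustration of the AIOps pattern described above: score call-home
# telemetry for anomalies and open/close a support case automatically.
# The z-score test and the "case" actions are hypothetical stand-ins,
# not InfoSight's actual models or APIs.
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from recent history."""
    if len(history) < 5:
        return False                      # not enough context yet
    spread = stdev(history)
    return spread > 0 and abs(latest - mean(history)) / spread > threshold

def run_detector(samples, window_size=50):
    """Yield (sample, action) pairs, where action is 'open', 'close', or None."""
    window, case_open = [], False
    for value in samples:
        action = None
        if is_anomalous(window, value) and not case_open:
            action, case_open = "open", True    # file the case automatically
        elif case_open and not is_anomalous(window, value):
            action, case_open = "close", False  # close once behavior normalizes
        window = (window + [value])[-window_size:]
        yield value, action

# Steady ~5 ms latency with one spike: the case opens, then closes on its own.
stream = [5.0, 5.1, 4.9, 5.0, 5.2, 5.1, 5.0, 40.0, 5.1, 5.0]
for value, action in run_detector(stream):
    if action:
        print(f"{action} support case at {value} ms")
```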
So these crucial new factors that have to be incorporated into everyone's thinking — cost efficiency, ease of use, and resource optimization — are going to place new types of stress on the storage hierarchy, and they're going to require new technologies to better support digital transformation. David Floyer, an analyst here at Wikibon, has been a leading thinker on the relationship between the storage hierarchy, workloads, and digital business for quite some time. I had a great conversation with David not too long ago. Let's hear what he had to say about this new storage hierarchy and the new technologies that are going to make these changes possible.

David, you've been looking at this notion of modern storage architectures for 10 years now, and you've been relatively prescient in understanding what's going to happen. You were one of the first guys to predict, well in advance of everybody else, that the crossover between flash and HDD was going to happen sooner rather than later. So I'm not going to spend a lot of time quizzing you. What do you see as a modern storage architecture? Let's just let it rip. Okay, well, let's start with one simple observation. The days of standalone systems for data have gone. We're in a software-defined world, and you want to be able to run those data architectures anywhere the data is. That means in your data center where it's just been created, in the cloud — a public cloud — or at the edge. You want to be flexible enough to do all of the data services wherever the best place is, and that means everything has to be software-driven. So software-defined is the first proposition of a modern data storage architecture. Absolutely. Second? The second thing is that there are different types of technology. You have the very fastest storage, which is in the DRAM itself. You have NVDIMM, which is the next one down from that — expensive, but a lot cheaper than DRAM. And then you have different sorts of flash: high-performance flash, and 3D flash with as many layers as you can get, which is much cheaper. And at the bottom you have HDDs and even tape as storage devices. So the key question is, how do you manage that sort of environment? Well, let me stop you, because it still sounds like we have a storage hierarchy. Absolutely. And it still sounds like that hierarchy is defined largely in terms of access speeds and price points. Price points, yes. Those are the two main ones — and bandwidth and latency as well, which are tied into those.
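One way to picture the hierarchy David just walked through is as a placement rule: pick the cheapest tier that still meets a workload's latency requirement. The latency and cost figures below are rough order-of-magnitude assumptions for illustration, not vendor specifications.

```python
# Minimal sketch of the storage hierarchy as a placement decision: choose
# the cheapest tier that still meets the workload's latency target.
# Latency and $/GB numbers are rough illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_us: float    # typical access latency, microseconds
    cost_per_gb: float   # rough relative cost, $/GB

HIERARCHY = [
    Tier("DRAM",                    0.1, 5.00),
    Tier("NVDIMM",                  0.3, 2.50),
    Tier("performance flash",     100.0, 0.50),
    Tier("3D NAND flash",         200.0, 0.15),
    Tier("HDD",                 8_000.0, 0.03),
    Tier("tape",           60_000_000.0, 0.01),
]

def place(dataset_gb, max_latency_us):
    """Return the cheapest tier whose latency satisfies the workload."""
    eligible = [t for t in HIERARCHY if t.latency_us <= max_latency_us]
    tier = min(eligible, key=lambda t: t.cost_per_gb)
    return tier.name, round(dataset_gb * tier.cost_per_gb, 2)

print(place(1_000, max_latency_us=500))       # latency-sensitive -> cheap flash
print(place(100_000, max_latency_us=50_000))  # looser target -> HDD
```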
So if you're going to have this everywhere, and you need services everywhere, what you have to have is an architecture that takes away all of that complexity, so that all you see from an application point of view is data. How it gets there, how it's put away, how it's stored, how it's protected — that's all under the covers. So the first thing you need is a virtualization of that data layer. The physical layer. A virtualization of that physical layer, yes. And secondly, you need that physical layer to extend to all the places that may be using this data. You don't want to be constrained to "this dataset lives here." You want to be able to say, okay, I want to move this piece of programming to the data as quickly as I can, because that's much, much faster than moving the data to the processing. So I want to know where all the data is for this particular dataset or file, how it all connects together, and what the latency is between everything. I want to understand that architecture, and I want a virtualized view of it across all the nodes that make up my hybrid cloud. So let me be clear here. We're going to use a software-defined infrastructure that allows us to place the physical devices with the right cost and performance characteristics where they need to be, based on the physical realities of latency, power availability, hardening, et cetera. And the network. And the network. But we want to mask that complexity from the application developer and the application administrator, and software-defined helps do that but doesn't completely do it. Well, you want services which... Exactly — services on top of all that, recognizable by the developer, by the business person, by the administrator, as they think about how they use data toward those outcomes. Not "use a storage device," but "use the data" to reach application outcomes. That's absolutely right. And that's what I call the data plane: a series of services that enable that to happen, driven by the application requirements. So we've looked at this, and some of those services include end-to-end compression, deduplication, backup and restore, security, and data protection. Those are the services the enterprise buyer now needs to think about, so that they can be applied by policy wherever they're required, based on where the data is used and where the event takes place. And then, at the bottom of all that, you still have the different types of devices. You still want hard disks, for example — they're not disappearing. But if you're going to use hard disks, you want to use them in the right way for a hard disk: give it large blocks, and have the data going in and out sequentially all the time. So the storage administration, the physical schema, and everything else are still important in all of this. But they're less important — less the centerpiece of the buying decision. Increasingly, the question is how well this stuff supports the services the business is using to achieve its outcomes. And you want to use the lowest-cost option you can, and there will be more and more options open. But the automation of that is absolutely key. And for that automation, one of the key things a vendor has to do is learn from usage across as broad a number of customers as it can — learn what works and what doesn't, so that it can build that learning into its own software and software services. So it sounds like we're talking about four things: we've got software-defined; we still have a storage hierarchy defined by cost and performance, but now mainly semiconductor-based; we've got great data services that are relevant to the business; and we've got automation that masks the complexity from everything, with a lot of AI automating the running of things. Fantastic.
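David's "data plane" — compression, deduplication, backup, and protection applied by policy wherever the data lives, rather than configured device by device — might look, in miniature, like the sketch below. The policy names and service lists are hypothetical illustrations, not any vendor's actual catalog.

```python
# Hypothetical sketch of policy-driven data services: a dataset carries a
# policy tag, and the platform resolves the same service chain for it
# wherever it lives. Policy names and services are illustrative only.

POLICIES = {
    "mission-critical": ["compression", "replication", "hourly-snapshot", "encryption"],
    "general-purpose":  ["compression", "deduplication", "daily-backup"],
    "archive":          ["deduplication", "weekly-backup"],
}

def services_for(dataset):
    """Resolve services from the dataset's policy tag, not from its device."""
    return POLICIES[dataset["policy"]]

datasets = [
    {"name": "orders-db",   "location": "on-prem flash", "policy": "mission-critical"},
    {"name": "clickstream", "location": "public cloud",  "policy": "archive"},
]
for ds in datasets:
    # The same policy yields the same services on-premises or in the cloud.
    print(f"{ds['name']} @ {ds['location']}: {', '.join(services_for(ds))}")
```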
So David's thinking on the new storage hierarchy, and how it relates to new classes of workload, is a baseline for a lot of the changes happening in the industry today. But we still have to turn technology into services that deliver higher levels of value. Once again, let's go back to Dave Vellante's conversation with Sundip Arora and hear what Sundip has to say about some of the new digital services, some of the new data services, that are going to be essential to supporting these new hybrid storage capabilities.

What it does is give us the opportunity to look not just at call-home data from storage, but also at call-home data from the compute side. Then we can correlate the data coming back to get better predictability and outcomes across your data center operations, as opposed to doing it one layer of infrastructure at a time. And you also set out a vision of this orchestration layer. Can you talk more about that? Are we talking about across all clouds, whether it's on-prem or at the edge or in the public cloud? Yeah, we are. We're talking about making it as simple as possible, where customers are not necessarily picking and choosing. It allows them to have a strategy that goes across the data center, whether that's a public cloud, building their own private infrastructure, or running on a traditional on-premises SAN infrastructure. That cloud fabric vision allows our customers to do that. And what about software-defined storage? Where does that fit into this whole equation? Yeah, I'm glad you mentioned that, because that's the third tenet of what HPE truly brings to our customers. Software-defined allows us to maximize the utilization of the existing resources our customers have. So we've partnered with a number of really strong software-defined vendors, such as Commvault, Cohesity, Qumulo, and Datera, and we work very closely with the likes of Veeam and Zerto. The goal is to provide our customers with a whole range of options for building a software-defined infrastructure on the Apollo series of products. Apollo servers are, for us, extremely dense storage products that allow for both cost and resource optimization.

So Sundip made some fantastic points about how new storage technologies are going to be turned into usable services that digital businesses will require as they conceive of their overall hybrid storage approach. Here's an opportunity to hear a little bit more about what HPE thinks about some of these crucial areas. Let's hear what they have to say.

In this Chalk Talk Short Take, I'm going to introduce you to HPE Primera storage. If you want the agility of the public cloud but need the resiliency and speed of high-end storage for mission-critical applications, you've been forced to trade one for the other: high-end storage is fast and reliable, but falls short on agility and simplicity. What if you could have it all — both agility and resiliency for your mission-critical apps? Introducing the world's most intelligent storage for mission-critical apps, HPE Primera. It delivers an on-demand experience so storage is instantly available; app-aware resiliency backed with a 100% availability guarantee; and predictive acceleration, so apps aren't fast some of the time but fast all the time, with embedded AI. Let me tell you more about how HPE Primera is engineered to drive unique value in high-end storage. There are four areas we focus on: global intelligence, powered by the most advanced AI for infrastructure, InfoSight; an all-active architecture with multiple nodes for higher resiliency and limitless parallelization; a service-centric OS that eliminates risk and simplifies management; and Timeless Storage, a new ownership experience that keeps getting better. To learn more, go to hpe.com/storage/primera.

So that's been a great series of conversations about hybrid storage, and I want to thank Sundip Arora of HPE, David Floyer of Wikibon SiliconANGLE, Jim Kobielus of Wikibon SiliconANGLE, and my colleague Dave Vellante for helping out on the interview side. I'm Peter Burris, and this has been another Wikibon theCUBE digital community event sponsored by HPE.
Now stay tuned for our crowd chat, which will be your opportunity to ask your questions, share your experiences, and push forward the community's thinking on hybrid storage. Once again, thank you very much for watching. Let's crowd chat.