We're back here inside theCUBE, SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. We're in Las Vegas. This is day two, toward the back end of day two of three days of live wall-to-wall coverage. I'm John Furrier, with Dave Vellante of Wikibon.org. And I've got to say, we go where the stories are, and there's a lot of action. We just talked cloud. Now we're talking storage. Dave, what's your take here, and who's our next guest?

Well, we're seeing a lot of enthusiasm from HP. We heard Dave Donatelli this morning. I tweeted out, John, that Dave was relaxed. He was concise, clear, and funny.

You know, it's not really... Yeah, lots of retweets on that.

Yeah, I think I did, actually. So, our next guest is great. Milan Shetty is here. He's the vice president and CTO of HP Storage. Milan, welcome.

Thank you. Thank you, Dave.

So, John. Good to see you. I just saw you recently. We did a deep dive on the announcement that you guys made today, which was awesome. You locked us in a room for eight hours and injected us with Kool-Aid. We came out, we were still alive. Great announcement, and you guys are very enthusiastic about it. So we're going to talk about that, the 3PAR all-flash array, and some of the other innovations you guys are doing. But again, welcome to theCUBE.

Thank you. Thank you, Dave. Thanks again for having me here.

Yeah, so let's start with the announcements today, the all-flash array that's been anticipated. You guys had indicated that's the direction you're going, and we heard Dave Donatelli today talk about the trade-offs customers had before you guys came in. So why don't you recap that from a strategy standpoint? What are you trying to solve there?

Sure. So look at the fact that a new medium, flash, has come into the marketplace, and it's an order of magnitude faster, with lower latency, than disk. How would people use it?
We have been thinking about that from an innovation standpoint quite a bit, and three things came to mind when we did all of the investigation. First: how can we apply tier-one features on flash and get all the data services on the flash stack, so that people don't have to worry about the endurance aspect of flash and also get the maximum performance they can from it? That was one aspect. The second aspect was that it's not all about just the performance and the latency: how do we deliver the feature sets that go along with it, the tier-one functionality people know about? And third, how can we make it cost-effective, and how do we balance the resiliency, cost, and endurance factors in that equation?

So we saw in the marketplace, on one side, the pure high-speed performance plays: like having a race car, but no features at all, no rear-view mirrors, no tier-one features, a new management paradigm. That kind of market was going on. And on the other side, there were people who had two-controller-node architectures, and they were just inserting flash into their existing architecture, and it was not performing as well. So the question was: how do we bring in a multi-node architecture, improve its efficiency in terms of getting the best latency we can, and provide all the tier-one services, snapshots, replication, you name it, which we've been delivering on our platform for a very long time? Putting all three of those things together was the key element, and we brought that with the announcement we made today.

So that's what you mean by features? You're talking about a rich storage management stack that you've built out over a decade. Because the startups can say, hey, we have features, we have compression, we have deduplication, and maybe some will OEM a stack from a third party.
So double-click on that a little bit. What features are we talking about?

Absolutely. So when people talk about the endurance of flash: first, we have thin provisioning capabilities built into the 3PAR stack, so we don't do writes unless they're necessary; in the process of provisioning, we don't do writes to the flash array. Then we have thin reclamation and zero-detect capabilities: if we detect a bunch of zeros, we don't even send that to the array at all. So we minimize writes. It's all the data reduction techniques which already existed. Then there are richer services around snapshots and cloning, and the wide striping in 3PAR, which makes sure that all the I/O goes across more than two controllers. It's a four-controller, scale-out architecture: just spread the load around across all the resources available, so that you write fast, but also write only what is absolutely needed. That alone extends the endurance life of the flash, and it's all built in. It's all built into the package, and it's the same software stack, from an architecture standpoint, for the high-end array, for the low-end and mid-range arrays with the 7200 and 7400, as well as for the all-flash array. So the uniformity of management, and these rich data services that already do data reduction so you're not wearing out your flash, are a very critical component. We built all of that into the controller, and now it's available in the all-flash array.

But you guys built a lot of momentum in the marketplace around the notion that you had a modern architecture. I mean, you attacked traditional tier one and really did a very good job at that. So people are going to look at this and say, well, wait a minute, Milan. This is a disk-based architecture.
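The zero-detect and wide-striping ideas Shetty describes can be sketched in a toy model like this. To be clear, this is an illustration of the general techniques, not 3PAR's actual implementation; all names here are invented:

```python
# Toy sketch of two ideas from the discussion above: zero detect
# (skip writing all-zero blocks to flash at all) and wide striping
# (spread writes round-robin across all controllers to even out
# load and wear). Illustrative only, not 3PAR code.

BLOCK_SIZE = 16  # bytes per block in this toy model

class Controller:
    def __init__(self, name):
        self.name = name
        self.writes = 0  # physical write count (a proxy for flash wear)

    def write(self, block):
        self.writes += 1

class ToyArray:
    def __init__(self, num_controllers=4):
        self.controllers = [Controller(f"ctrl{i}") for i in range(num_controllers)]
        self.next = 0
        self.skipped = 0

    def write_block(self, block: bytes):
        # Zero detect: an all-zero block never reaches the flash.
        if block == b"\x00" * len(block):
            self.skipped += 1
            return
        # Wide striping: round-robin each real write across every
        # controller so no single controller wears out first.
        self.controllers[self.next].write(block)
        self.next = (self.next + 1) % len(self.controllers)

array = ToyArray()
for i in range(8):
    data = b"\x00" * BLOCK_SIZE if i % 2 == 0 else bytes([i]) * BLOCK_SIZE
    array.write_block(data)

print(array.skipped)                          # 4 all-zero blocks skipped
print([c.writes for c in array.controllers])  # [1, 1, 1, 1] — evenly striped
```

Half the blocks never touch flash at all, and the rest land evenly across the four controllers, which is the endurance-plus-performance combination being described.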
You designed this to solve the problems of spinning disk, and now you're bolting on flash. What do you say to that?

Yeah, absolutely. So we have supported SLC and MLC drives in our 3PAR stack for a very long time. What is unique about the 7450 platform release is that we actually optimized the 3PAR stack to make sure it treats this media as a different media than disk. We optimized a whole bunch of stuff: how the internal buffers are allocated, how the I/O stack works at the SCSI layer, how the I/O stack works on the fabric. There's a lot of software. I would go so far as to say there are somewhere around 500,000 to 600,000 lines of code in the standard software stacks, like the Fibre Channel stack and the iSCSI stack, which are not needed for flash. So we optimized the entire code path. When we released the flash-optimized 7450, a lot of work went not only into the multi-node architecture, which you can leverage, but also into optimizing the stack and the I/O path. I use the analogy of white-water rafting: the data is going back and forth between different buffers, between the Fibre Channel stack, the iSCSI stack, the SCSI stack, and on down. We optimized that entire code path, which is why the latency we are getting is among the best, from both the read standpoint and the write standpoint, and it's only going to get better. So to your point about whether flash is treated as a different media: all the algorithms and all the I/O paths that existed in the standard Fibre Channel and SCSI stacks, we optimized all of that. That's about half a million lines of code path that has been optimized, and that's why we're getting the throughput and the latency we're standing behind.
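The white-water-rafting analogy, payloads copied from buffer to buffer as they pass through each protocol layer, can be illustrated with a toy comparison. This is my own sketch of the general idea of eliminating per-layer buffer copies, not HP's actual code path:

```python
# Toy illustration of the "white-water rafting" point above: a layered
# I/O path that copies the payload into a fresh buffer at every layer,
# versus an optimized path that hands one buffer straight down.
# Purely illustrative; layer names are invented.

def layered_io(payload: bytes, layers=("fibre_channel", "scsi", "driver")):
    copies = 0
    buf = payload
    for _ in layers:
        buf = bytes(buf)  # each layer copies into its own private buffer
        copies += 1
    return buf, copies

def optimized_io(payload: bytes):
    # Optimized path: the layers share one buffer, zero extra copies,
    # so per-I/O latency no longer scales with the number of layers.
    return payload, 0

data = b"x" * 4096
_, slow_copies = layered_io(data)
_, fast_copies = optimized_io(data)
print(slow_copies, fast_copies)  # 3 copies vs. 0 copies per I/O
```

With disk, those extra memory copies hide behind millisecond seek times; with flash latencies, they become a visible fraction of each I/O, which is why stripping them out of the code path matters.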
And the SPC benchmark numbers we put out have been the best in the industry. But again, to summarize: the optimization of the code path in the standard protocol stacks, without the applications knowing what's happening, has been very key to how we delivered these numbers.

Now, another point that we debated, and let's have a summary of that debate here, was the cost. So you've got this architecture, it's got an ASIC in it, a lot of benefits to the ASIC, but it's not a blank-sheet-of-paper architecture. It's not a scale-out, shared-nothing architecture. We had that kind of interesting discussion. Why are you confident that you can compete on cost with the startups?

Sure. So first of all, think about the primitives and the design center 3PAR was built on. If you look at the history of 3PAR, 3PAR was built for the storage service providers. A lot of the stuff we talked about, that was the genesis: the service provider.

I said today that I have the 3PAR red herring in my office and I go back and read it every now and then. It's all about utility.

Absolutely. And if you look at the characteristics of what people were looking for from a service provider: they wanted a thousand customers, 2,000 customers, 10,000 customers all going to the same array, right? They wanted high IOPS. They also wanted low latency. They also wanted guaranteed and sustained latency, right? If you look at all the characteristics the storage service providers, the SSPs, wanted, they are the same characteristics that flash as a media requires. So from the day-one design of 3PAR, it was always optimized for that. For instance, we released a feature called Priority Optimization, right?
You can guarantee I/O bandwidth and latency to each of your tenants, right? And it was already built as a multi-tenant architecture. So this was a ground-up design for how IT was going to be in the future, right? Whether it was a virtual server or a physical server, it guarantees the throughput or the latency required from the stack. Which is why it was much easier for us to adapt to flash: flash as a media is more or less an adjacent requirement to what the service providers always needed. So the work we had to do for flash was incremental. We didn't have to change the core mesh architecture or anything; what we had to do was just optimize the standard protocol stacks to make sure they don't spend time waiting for I/O to be sent out to the media, and that's why we got the numbers we got.

Now, to your point about the scale-out architectures, right? There was, there were...

I'll try to get a question in, don't worry. You're on a roll.

So one of the problems with scale-out architectures is when you have a shared-nothing architecture. I'd note that 3PAR is a shared-everything architecture. With a shared-nothing architecture, there is a trade-off that folks don't necessarily realize: shared-nothing architectures have to do some kind of data protection scheme, and generally that data protection scheme is replication to a different node or something similar. So every time you do that, right? You are doubling your latency. You are hurting your capacity utilization. You're suffering your...

Making copies.

Making copies, right? Your capacity utilization went out the door. Your latency is now doubled. Your bandwidth has shrunk. And some scale-out architectures actually keep three copies of the data.
So all of those numbers, latency, bandwidth, and throughput, just get two or three times worse. And that's not to mention the rich services we talked about. So it's an inherent disadvantage that pure shared-nothing scale-out architectures have.

Okay, we're running out of time, but I've got to get one question in. I think that's a good conversation, and I don't want to interrupt it. But I wrote a post yesterday in Forbes, with Dave Donatelli and Craig Nunes in the headline. I couldn't get David Scott; he's been busy all week meeting with customers. I asked them the question: why is software-defined storage so hot? Donatelli's comment was, storage is an operating system now. And then he went on and on. Go to Forbes.com and search on HP if you're interested in reading that post. But I want to ask you, in short, to end this segment: why is software-defined storage so hot? And what does Donatelli mean about storage being an operating system now?

Sure. So think about the folks who are, let's say, first-time SAN buyers: folks who have outgrown their direct-attached storage but are not yet ready to make a wholesale investment in a full SAN fabric. Or folks who want to have a backup strategy but are not yet ready to go to dedupe appliances and everything. There is a transition path for them, where they can use their internal disks and be first-time SAN buyers or first-time dedupe buyers, and eventually grow from there into external attached storage and external attached dedupe appliances. So what software-defined storage provides is a bridge, not only for the enterprises that are transitioning, but also for the small and medium businesses that are transitioning from a direct-attached model to the external storage model. That bridge never existed in the past.
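The shared-nothing trade-off being argued here, that keeping N copies of every write multiplies physical writes while dividing usable capacity and effective bandwidth, works out as simple arithmetic. This is a back-of-envelope model under simplifying assumptions (synchronous full copies, no dedupe or erasure coding), not vendor-measured numbers:

```python
# Back-of-envelope model of the shared-nothing trade-off discussed
# above: replicating each write to `copies` nodes multiplies physical
# writes (write amplification), divides usable capacity, and shrinks
# effective write bandwidth by the same factor.
# Assumptions: synchronous full-block copies, no dedupe, no erasure
# coding. Numbers below are invented for illustration.

def shared_nothing_overhead(raw_tb, node_write_mbps, copies):
    usable_tb = raw_tb / copies                # every block stored `copies` times
    eff_write_mbps = node_write_mbps / copies  # each host write fans out
    write_amplification = copies               # physical writes per host write
    return usable_tb, eff_write_mbps, write_amplification

# Hypothetical cluster: 100 TB raw, 1000 MB/s of raw write bandwidth.
for n in (1, 2, 3):
    usable, bw, wa = shared_nothing_overhead(100, 1000, n)
    print(f"{n} copies: {usable:.1f} TB usable, {bw:.0f} MB/s, {wa}x writes")
```

At two copies, usable capacity and write bandwidth are halved; at three copies, cut to a third, which is the "two times or three times worse" claim above, and it matters more on flash because the extra physical writes also consume endurance.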
And that's what the StoreVirtual VSA and the StoreOnce VSA provide: a way to transition from internal storage to external storage. And that's why software-defined storage is hot.

And Craig Nunes is going to come on theCUBE shortly today, and his take is it's a whole new paradigm shift, a whole new rethinking. So software-defined storage, there it is. Big announcement yesterday here from HP Storage: 3PAR continuing to lead the transformation with great products and performance, and the business side very profitable and growing with it, under Dave Donatelli's group in the enterprise. We'll be back with more coverage here shortly after this break. This is SiliconANGLE's theCUBE, live from HP Discover. We'll be right back.