It's theCUBE. Now, here's your host, Stu Miniman.

Hi, this is Stu Miniman with Wikibon, with a special presentation of theCUBE here at the ribbon cutting at Infinidat's new briefing center in Waltham, Massachusetts. Excited to have with me Brian Carmody, who's the CTO of Infinidat. Brian, thanks for joining me.

Hey, Stuart, how are you doing?

I'm doing great. So we've talked to some of your team here and gotten the inside view; now we're going to be digging into some of the innards of what's going on in the industry. So Brian, not much has been going on in the storage industry, let me see. In the last month, we had the finalization of the largest acquisition-slash-merger in the history of technology, with Dell buying EMC, and Nutanix just IPO'd. So when we've got the CTO, we always want to dig in. What's in your head? What are the big megatrends that you're seeing, and how do they impact what you're doing?

Yeah, so this has obviously been a crazy era of innovation for us. Even just looking back at the past two years: 2015 was kind of the year that storage got fast. It was the year that NAND flash hit the knee of the adoption curve, and every marketing person everywhere was hashtag-AFA. And then 2016 was kind of the year, we think, that storage got commoditized. This was when software-defined and hyper-converged technologies hit the knee of the adoption curve. Just like 2015 was capped by the Pure IPO, 2016 was capped by the Nutanix IPO. So I think it's a really interesting question: what comes next? What's the next mountain that we're going to climb as an industry? And we're hearing really interesting things from customers about what their priorities and challenges are.
So first off, going into 2017, one of the really interesting phenomena is that the requirements from traditional enterprise, let's say the CTO of a bank building a next-generation data center, versus a mega cloud provider, are starting to converge. So this idea that the cloud is one thing and enterprise is something else, I think we're starting to move past that. Obviously, there's still a huge trend of compute-heavy workloads moving off-premises into public clouds, and a net flow of storage-heavy workloads moving on-premises. But I think we're now reaching an equilibrium point between those two. So that is certainly one trend.

Another thing we're hearing a lot about is the rise of new KPIs for measuring IT systems. Acquisition cost is still a big piece of the equation, but new metrics like power, space, and cooling are becoming critically important, because with the transition to the cloud, even for CIOs of very large Fortune 500 companies, their computations are often happening for the first time in spaces where they are a tenant on someone else's kit. They are renting space or renting capacity. So all of this is putting a lot of pressure on systems designers to really focus on density of storage and density of computation. And we see that this is contributing to the rise of a new class of storage technologies, called hyper storage systems, which are designed to meet those goals.

All right, so Brian, I've tried to create different market categories before. When I joined Wikibon, it was "hyperscale invades the enterprise."
Before people were talking about hyper-converged as much, we called it Server SAN, actually, because it was the benefits of a SAN brought to the server. So you've got that term "hyper" in there. Is it hyperscale, is it hyper-converged, is it some other hyper-ness? What's the general idea you're trying to get at with this new category?

So let's take a look at the existing commercially available technologies. It's interesting to think about them on a two-dimensional chart of density versus latency. You have, for example, the traditional monoliths. Their latency is low enough for primary storage, but they do not tend to be very dense, under a petabyte of storage per rack. That's where the industry began; that's what a lot of us cut our teeth on. They're being superseded by all-flash arrays. These are higher density, because they have data reduction technologies built natively into their data paths, and they have better latency, so they're moving up and to the right with respect to the monoliths. But that comes at a price: they tend to be exceedingly expensive and relatively small systems. Then you have the SDS, server-based storage, and scale-out stuff. They tend to be very close to the density of the monoliths, again half a petabyte to a petabyte per rack, with a regression on latency, but they're being widely adopted because the costs are just so much better than the monoliths and the AFAs. And that's the entire enterprise storage industry right in this area here.

Now, if you move to higher density, you have systems like the Facebook Open Vault. This is an awesome open-source storage system that Facebook developed as the basis of Haystack and F4, two of the largest storage systems in the world right now. These are incredibly dense storage systems, multiple petabytes per rack, but they're very high latency.
They're used for cold data only. And other things like Amazon Glacier are clustered down in that high-latency but super-dense region. So hyper storage, if you move around that two-dimensional chart, is the upper right-hand quadrant. It is storage technology that has the reliability of the monoliths; it has the cost structure, the programmability, and the ability to run on any type of hardware that the SDS systems have; but it has the density and the data center profile of hyperscaler storage. So this is completely uncharted territory. This is where all of the R&D spend at companies like Google and Amazon is going; everybody's racing to figure this out. And this is the kind of Wild West where we operate. We have a three-year head start on the on-prem part of this, but it's not going to last, because this is the remaining uncharted territory in the industry.

Really interesting. So you've heard it here first: hyper storage. Something I've been seeing for a number of years is that the big financial guys have had hyperscale envy, is really what it was. They're like, we spend huge amounts of budget on IT, we know our stuff. How dare a bunch of retail guys come in here and think they understand this space. So it sounds like you're bringing some of that back to them. Is Infinidat the only one? You mentioned Google and Facebook. Is anybody else packaging this for the enterprise, other than Infinidat?

Not yet. Some of our competitors that are building their systems out of all-flash technologies are moving in the right direction, but their media itself has to become a lot more dense, and the cost has to drop significantly, before they can be realistic plays. Until then, it's going to be differentiated by scale. When you want to do petabyte-scale stuff, you do it with hyper storage architectures, and when you want to be small and tight and fast, you do it with AFAs.
But I think those lines will blur over time.

Great. So it sounds like you've got a clear differentiation: the software-defined pieces just don't deliver on the density you're talking about, or some of the high-end performance. When we start getting things like NVMe, which I hear is going to be a game changer for the hyperscale pieces, do we start to see the blurring of the lines between some of these architectures? Or is this something where two years from now you're going to say, okay, here's the last wave and here's the next wave?

Yeah, so if you take a longer view, instead of two years, if we look at a five- and ten-year view, this is all there is going forward. There are hyperscale architectures, or disaggregated architectures, which are for compute-heavy, storage-light workloads. And then you have hyper storage, which is for storage-heavy, compute-neutral workloads. If you take a ten-year window, that's all there is out there.

Well, Brian, I'm hoping we can do a whiteboard with you sometime in the future, or maybe there'll be some kind of thesis on the hyperscale and hyper storage categories. But I appreciate you sharing this with our audience here. I want to give you the final word on the hard work that still needs to be done in the storage industry over the next few years.

Go, Patriots.

All right, we'll drop the mic there. Thank you, Brian, so much for joining us. And thank you for watching theCUBE.