Live from Las Vegas, Nevada, it's theCUBE at IBM Edge 2014. Brought to you by IBM. Now here are your hosts, John Furrier and Dave Vellante. Welcome back to Las Vegas, everybody. This is day two, we're deep into day two, winding down, and we've been talking to customers all week, practitioners, technologists, talking about IBM's portfolio, having a software-defined Kool-Aid injection. It's been software-defined month, it feels like, with a lot on cloud. And we're going to talk high-end here. We haven't talked too much about it; we talked to some customers about XIV, but we're going to talk about the high-end a little bit. Sidney Chow is here, he's the vice president of high-end disk systems at IBM, and he's joined by Ori Bauer, who's the director of XIV Worldwide Development. Gentlemen, welcome to theCUBE, thanks for coming on. Thank you for hosting us. So Sidney, I want to start with you. Talk a little bit about what high-end means. When I first met the guys from XIV, we were sort of debating, is it tier one, is it not tier one, is it all of the above. So what is high-end? So for high-end storage, the most important thing is resiliency, data availability. If you can't access your data, or if you lose your data, all bets are off. In this highly connected, 24x7, no-downtime kind of world, you have to have access to your data at all times. So that's number one for high-end. Number two is performance. You have to have the performance that gives you the low latency and high IOPS to handle the workload, and most importantly, you've got to be able to do this with unpredictable demands coming in and out. Those two things are absolutely critical when you think about the high-end. Any tier one high-end storage device needs to meet at least those two minimum criteria. And if you think about what's happening in the IBM storage portfolio, XIV and DS8000 are both very good examples of tier one storage.
So another big differentiator between DS8000 and XIV is mainframe attach, right? But a lot of people suspected when IBM bought XIV, they said, oh, they're just going to replace DS8000 with XIV. That didn't happen. Why not? So one of the things you've got to look at is the amount of work, effort, and history that went into the DS8000 stack, not only in the mainframe attach piece, but also in the replication, the three-site, the four-site, the five-site configurations, and all of the tunability of DS8000 that can get you down to sub-millisecond performance, and especially with flash now, we can get down to a couple hundred microseconds or less. That tunability is critical for the very exacting workloads, be it on mainframe or in the very high-end, heavy OLTP kinds of transactions. So DS8000 definitely is designed and engineered for those ultra-high performance, ultra-high availability scenarios. Now there is a very large segment that XIV serves in the open storage environment, especially when you're talking about VMware environments, some of the Power and AIX environments, and the Linux environments. Those environments actually comprise most of the market, and what customers want is something they can just plug in and forget; they don't want to deal with it, don't want to tune it. I just want to plug it in and let the system automate everything, and that's where XIV really, really excels. So XIV is sort of the tier one class for what people used to call open systems, remember? It's really non-mainframe, the fat middle, if you will. And we were very blessed to have two products that really complement each other and serve different parts of the market. Yeah, I mean, there is some overlap there, but as Joe Tucci said, it's better to have an overlap than big, giant gaps that you can drive a truck through. So, all right, let's talk about XIV.
You guys have made some enhancements recently. What are those? So as Sidney mentioned earlier, we are focused heavily on cloud storage right now. XIV is positioned as the storage for cloud, and what we have done through the recent updates and releases is enhance those capabilities so we can better service those types of workloads. First, before we go into the new things we developed: when you look at the cloud environment and storage for cloud, there are some fundamental requirements that you must meet. For example, it must be extremely easy to manage and use, to the extent that it can be completely automated without any human intervention. That's the end of the scale. Google Nevada. Yeah, yeah, exactly. In order to get there, you need to be able to provide consistent performance no matter what type of workload you throw into the box; you want it to continue to perform extremely fast and extremely well, with no hot spots and without any human intervention doing all sorts of tuning and worrying about RAID groups and so on and so forth. And the XIV architecture, in essence, the fact that we partition the data and distribute it across all the nodes in the system and across all the hard drives or flash SSDs in the system, allows us to provide consistent performance no matter which type of workload. And you guys posted a nice blog about what is required for cloud storage, and one of the problems you discussed there is the noisy neighbor, where a specific workload can all of a sudden peak in demand for IOPS or bandwidth. In XIV, because all the IOPS are basically spread across the different drives and the different nodes in the system, this problem is literally nonexistent. So this is one element. Hyper-spreading is obviously the fundamental architectural premise of XIV. I agree. I agree.
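The spreading idea described above can be sketched in a few lines: every volume is cut into fixed-size partitions, and each partition is mapped pseudo-randomly across all modules, so no single drive becomes a hot spot and a peaking workload is absorbed by the whole system. The hashing function, module count, and names here are illustrative, not XIV's actual algorithm.

```python
# A minimal sketch (not IBM's actual implementation) of hash-based
# partition spreading: a stable hash maps each partition of a volume
# to a module, producing a near-uniform distribution across modules.
import hashlib

def module_for_partition(volume_id: str, partition_index: int, num_modules: int) -> int:
    """Map a fixed-size partition to a module via a stable hash."""
    key = f"{volume_id}:{partition_index}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_modules

# Spread 10,000 partitions of one volume across 15 modules and
# observe that the counts come out close to uniform.
counts = [0] * 15
for i in range(10_000):
    counts[module_for_partition("vol-42", i, 15)] += 1
```

Because the mapping is deterministic and near-uniform, a burst of I/O against any one volume lands on every drive at once rather than on one RAID group, which is the property behind the noisy-neighbor claim.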
So this is one element of what it takes for storage to be cloud storage. The second part is the integration into the orchestration layer, be it VMware or OpenStack or just a homegrown orchestration layer. And here again, this is where we focus with XIV. We have very tight integration with VMware, and you asked about new things. So one of the things we are previewing here at the show is the integration into the VMware vCloud suite, and this is something we are demoing here in the solution center. And again, we are working very closely with VMware, doing continuous advancements so that we can hook up nicely and let the orchestration layer, if it is VMware, manage the storage in an easy manner. Same with OpenStack. We were one of the first storage arrays to support OpenStack. We have eBay as a customer who has been using this for over a year now, and then for those who have their own orchestration layer, we have the RESTful API that provides direct access to all the APIs we expose. Now, some additional requirements on cloud storage would be security related, so things like encryption. We have data-at-rest encryption on XIV: basically the data gets encrypted as you store it into the system, but you can decide when you want to enable encryption. So you can keep the storage unencrypted, and then as soon as you feel that you need to implement encryption, you just hit a button and the data gets encrypted. It gets encrypted as data at rest, and it has no performance impact and no tuning or special requirements; it just works. And one last element in terms of security that, again, we are just previewing right now is the multi-tenancy capability. And here the idea is to allow delegating some of the storage administration tasks to specific organizations, or specific parts within the organization.
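For the homegrown-orchestration case mentioned above, driving provisioning through a RESTful API might look like the sketch below. The host, endpoint path, and field names are hypothetical, invented for illustration; they are not the actual XIV REST schema.

```python
# A hypothetical sketch of building a volume-provisioning call against
# a storage REST API. Endpoint and payload shape are assumptions, not
# IBM's documented interface.
import json

API_BASE = "https://xiv.example.com/api/v1"  # hypothetical host

def build_create_volume_request(pool: str, name: str, size_gb: int) -> tuple[str, str]:
    """Return the (url, json_body) pair a client would POST to create a volume."""
    url = f"{API_BASE}/pools/{pool}/volumes"
    body = json.dumps({"name": name, "size_gb": size_gb})
    return url, body

# An orchestration layer would POST this pair with its HTTP client.
url, body = build_create_volume_request("gold-pool", "db-vol-01", 100)
```

The point is only that a plain HTTP-plus-JSON surface lets any homegrown orchestrator automate provisioning without vendor-specific tooling.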
So be it a department or, if it's a managed service provider, separate customers using the same XIV storage, each one can see a view of his own part of the storage and manage it by himself, so he can provision new volumes and just manage his section of the XIV by himself. So that's another important enhancement that we are just previewing right now. And one last element that I'll talk about that is also helpful here is quality of service. Quality of service basically lets you offer storage as a service. And when I say storage as a service, I differentiate that from storage as capacity. If you sell, internally within the organization or to external clients, storage as a service, it comes with a certain SLA, a service level, and what quality of service allows us to do is limit specific hosts, initially, to a certain level of IOPS or bandwidth, so that you can decide on different tiers, different SLAs in essence, for different types of clients. What we are doing right now with the multi-tenancy capability is tying the quality of service to pools, which allows you to tie pools to a specific domain, and then for specific customers you can decide how much IOPS they can consume out of the system, and if they go above that, we just limit it. Okay, good. Makes sense. I want to come back to quality of service, but before I do that, Sidney, I want to just clarify something. So tier zero, so-called tier zero, is that part of your portfolio or no? Tier zero, we're assuming you mean an all-flash array kind of system. Is that considered high-end disk? It is high-end, and Mycune owns that part of the portfolio. Right, okay. So essentially Mycune and everybody else in the industry wants to eat your install base. Right, so now at the same time you've got robust stacks, they're very mature.
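The multi-tenancy plus QoS model described above can be sketched roughly as follows: each tenant gets a domain, pools are tied to that domain, and an IOPS cap on the domain clamps what its pools can consume. All class and field names here are invented for illustration; this is not XIV's actual object model.

```python
# A minimal sketch, with invented names, of domains as tenant-scoped
# containers: pools belong to a domain, and the domain's IOPS cap acts
# as a ceiling (a limit, not a guarantee) for that tenant.
class Domain:
    def __init__(self, name: str, max_iops: int):
        self.name = name
        self.max_iops = max_iops            # QoS ceiling for the whole tenant
        self.pools: dict[str, int] = {}     # pool name -> provisioned GB

    def add_pool(self, pool: str, size_gb: int) -> None:
        """A tenant admin provisions capacity only inside his own domain."""
        self.pools[pool] = size_gb

    def allow_iops(self, requested: int) -> int:
        """Clamp a tenant's I/O demand to the domain's ceiling."""
        return min(requested, self.max_iops)

# A service provider carves one array into per-customer domains.
tenant_a = Domain("tenant-a", max_iops=5_000)
tenant_a.add_pool("tenant-a-pool", size_gb=500)
```

Each tenant sees and manages only his own domain, while the cap lets the provider sell differentiated SLA tiers from shared hardware.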
You can add flash into your systems to extend their life for who knows how long. Where does flash play? Okay, so we just made an announcement last week on the DS8000 family, the DS8870, where we introduced what we call the high-performance flash enclosure. It is a ground-up design of flash that breaks away all of the bottlenecks in the system. As you all know, DS8000 was one of the first to introduce SSDs, back in the 2008-2009 timeframe, and that first step was simply to replace a disk with an SSD. It made the disk a lot faster, but the back-end bottleneck was still there, because everything was designed over many decades for a certain speed of disk. What we have done with the introduction of the high-performance flash enclosure on DS8000 is break through all of that. The DS8000 flash enclosure is up to four times faster than the SSD version. Now, how do we get four times faster? Because we're now bypassing all the other controllers and going directly to the POWER controller on the PCIe bus. You just bypass all the bottlenecks, and that's why it's designed from the ground up to handle very I/O-intensive workloads. So it's really the best marriage of flash performance with the ultra-high availability of the code stack that's already in DS8000. So there are a lot of things you can do to extend the life of the high-end. I liken it to a big brother and a little brother. The little brother's younger, he's got the swagger, thinks he's stronger. But the big brother has the moves. He knows the judo moves and the techniques. But also, over time, you're going to see a lot more of the FlashSystem technology and piece parts inside our other storage systems. When we bought Texas Memory Systems, the FlashSystem base, we didn't just buy a product. We bought the company and the technology, and we have every intention of taking that technology and embedding it into our other storage systems.
That's where you get the real return on that investment. And where does the high-end fit into the whole software-defined strategy? I think I know the answer to this: it plugs in, essentially, and provides data services. Is that right? Is that a fair way to think about it? Yes, so for the high-end, if you think about the orchestration layer and the data layer, the products that we have are absolutely superb at the data layer, and we're adding the orchestration layer with all the tight integrations that we do with VMware on XIV, for example, and with all of the orchestration layers that we have above the stack. Okay, I want to talk a little bit about quality of service; I said I was coming back to that. So quality of service, a lot of people say, okay, we've got quality of service. The way we like to look at it at Wikibon, and my colleague David Floyer sort of educated me on this, is, let me ask some questions. I presume I can provision capacity, let's say a 100-gigabyte volume, and get a guaranteed IOPS, pin it to the application, say 1,000 IOPS. And is it true you can do bandwidth as well? So what we do is IOPS and bandwidth, and what we allow is limiting, providing an upper limit. Yeah. Okay, so we're not guaranteeing, but we're providing an upper limit. So capacity as well, right? Yeah, capacity is what you provision. So you provide a ceiling. You won't go over X, but can I guarantee X or no? No, what we provide is the ceiling, the top, the maximum number of IOPS that you can get to. But you can't do a floor. You can't do a minimum, right? Okay, so is that roadmap stuff? Is that something you're working on? I would think from a cloud service provider standpoint, that would be important, right? Because I can monetize that. I can say, okay, I can guarantee you X and I can charge you for it. Right, and it relates actually to the discussion that you started to have with Sidney around flash and the additional capabilities.
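The ceiling-versus-guarantee distinction discussed here can be illustrated with a simple per-second I/O budget: requests above the cap get throttled, but nothing below it is reserved, which is exactly why it is an upper limit and not a minimum SLA. This is a toy model with invented names, not XIV's implementation.

```python
# A minimal sketch of a QoS *ceiling*: each second a tenant may issue
# at most max_iops I/Os; excess is throttled. Nothing is reserved, so
# this limits IOPS but cannot guarantee them.
class IopsCeiling:
    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.used_this_second = 0

    def tick(self) -> None:
        """Called once per second to reset the budget."""
        self.used_this_second = 0

    def admit(self, n_ios: int) -> int:
        """Admit up to n_ios within the remaining budget; throttle the rest."""
        budget = self.max_iops - self.used_this_second
        admitted = min(n_ios, budget)
        self.used_this_second += admitted
        return admitted

limiter = IopsCeiling(max_iops=1_000)
first = limiter.admit(800)    # fits within the 1,000 IOPS cap
second = limiter.admit(500)   # only the remaining 200 are admitted
```

A guaranteed minimum would additionally require reserving back-end capacity for the tenant, which is why the guests tie that roadmap question to flash and its performance headroom.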
With the introduction of additional flash into the arrays, it becomes easier to also provide a minimum capability, because flash gives you that performance headroom. And I can do that through an API call, presumably? Yes, yes, this is all controlled through the API. And I can change that policy on the fly? Yes, you can dynamically change it. And again, provide different tenants with different levels of quality of service. Okay, we've got to leave it there, gentlemen. Thank you very much for coming to theCUBE. It was a pleasure having you. Thank you. All right, keep it right there, everybody. We'll be right back with John Furrier to wrap. This is Edge, this is theCUBE. We're live from Las Vegas, we'll be right back.