Okay, we're back live inside theCUBE here at IBM Edge 2012 in Orlando. Barry Cohen, the CTO of the Edison Group. Welcome to theCUBE. Thank you. You guys are a research firm, you do a lot of hands-on testing. Yes. You work with IBM, the big guys, HP, EMC, and all the above. So tell us, one, what have you been working on with IBM, and share with us some of the recent work you just produced?

Well, most recently this year, we've been working with the V7000 group on two aspects of the platform. The first part of the year, we've been looking at manageability of the platform, and we've produced a white paper comparing the ease of management of the V7000 with EMC and NetApp storage systems, showing the great ease of management, ease of use, that the V7000 offers. And I know they were talking about it this morning, and in the presentation yesterday, that the ease of use and the manageability are significantly superior to most of the storage platforms on the market, especially in this market space.

How do you measure ease of use and manageability?

A few years ago we created, with one of the other vendors, a way of counting and comparing complexity. Okay. And basically, when you perform a task on a computer, it's a series of things you do that you generally call steps, but the interfaces vary: some have wizards, some have a tree-shaped view, others have a command line or a menu-based view. But the task itself is do this, do this, do this, do this, so we came up with a way of counting that, calibrating mouse clicks and things like that, that seems to work for almost any context. We know that's the case because we've done the same research for Oracle and IBM and HP, showing for each of their products how manageable they are. It's a good tool; I like to call it a pseudo-metric, because it's still subjective, but it does give a way of comparing within its context that works.
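A minimal sketch of how a step-counting pseudo-metric like that might be tallied. The action weights and the two platform task lists below are invented for illustration; they are not Edison Group's actual calibration:

```python
# Hypothetical step-counting "pseudo-metric" for ease of management.
# Weights and task data are illustrative assumptions only.

WEIGHTS = {"click": 1.0, "keystroke": 0.5, "wizard_page": 1.5, "cli_command": 2.0}

def task_complexity(actions):
    """Sum calibrated weights over the actions needed for one task."""
    return sum(WEIGHTS[kind] * count for kind, count in actions.items())

def platform_score(tasks):
    """Total complexity across a platform's management tasks (lower is easier)."""
    return sum(task_complexity(actions) for actions in tasks.values())

# Two fictional storage platforms performing the same two tasks:
platform_a = {"create_volume": {"click": 4, "wizard_page": 2},
              "map_host":      {"click": 6, "keystroke": 10}}
platform_b = {"create_volume": {"click": 9, "wizard_page": 4},
              "map_host":      {"cli_command": 3, "keystroke": 24}}

print(platform_score(platform_a))  # 18.0
print(platform_score(platform_b))  # 33.0
```

Lower totals mean fewer calibrated steps to complete the same tasks, which is the comparison the studies describe; the subjectivity lives in the weights, which is why he calls it a pseudo-metric.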
So when you think across all technologies and you look at a usability index, who's the rock star? Other than Apple, you mean? Well, you can throw Apple in. I wanna know what you think.

In enterprise computing, it depends on what we're talking about. In storage right now, it's IBM, followed by Sun, with Oracle, the ZFS platform, and I'm not familiar enough with a couple of the others that are emerging to give a good judgment on them as far as manageability is concerned. The database market, which is where we started with this, has changed so much that our original star is less of a star now, and everybody's becoming more complicated rather than less, so I don't know where that's going.

So are there technology areas that are going in the wrong direction from an ease of use perspective?

Yeah, I think databases have a problem in that they're being called upon to do so many different things that, no matter what they do, they're not going to be easy to manage right now. The major relational database companies are trying to also be big data, Hadoop-type database systems, and things like that, so they're trying to do too many things at once, and they can't possibly make that easy to do in the short term.

Okay, but some people are trying to turn what were discrete systems into appliances, a collection of systems into appliances, and certainly that's a big part of IBM's initiative here. And you contrast that against buying best-of-breed solutions, right? What's your take?

Well, let's look at the thing here, which is the V7000 and its role within the PureSystems environment. Appliance might not quite be the word, because it's an awfully big appliance, but by taking a series of good or even best-in-class components and abstracting their physicality, you enable rapid deployment, lower cost, and often better management. You lower the cost of the system in general and you simplify integration. IBM's PureSystems environment and some of the other unified systems, and I don't know what this category of computing should be called, work pretty well within themselves, but they don't play nice with others. Vblock, for example: if you don't have Vblock, you can't use it. Same with FlexPod. HP is moving toward the PureSystems model but is really not there yet. So if you look at an appliance that way, it's the right direction, because once everything is working, your main concern is running the workloads that device is supposed to be running, and an appliance model gets that complexity out of the way.

So, you figured out a metric for measuring ease of use. Have you figured out a metric for measuring the ROI on ease of use?

Well, in our method, we do a cost of ownership. We look at it from the point of view of management cost. In other words, you can save administrative time by 10%, 20%, 30%; in some of our studies, with the V7000, it's about 30%. 30% savings in terms of management? Over other systems. It means your storage administrators have more time to do the more complex, difficult things than they would otherwise. They're more efficient. They can manage more storage devices. One of the things this type of metric requires, to some extent, is a weighting factor: what do people do, what do they spend the most time on, and what's more important to spend their time on, which may not be the same thing. And so what we focus on is that combination. And if you can save 30% on the day-to-day, mundane maintenance tasks, so that you can deploy more storage for VMware, or you can simplify, in the PureSystems model, the deployment of workloads without even worrying whether it's on VMware or PowerVM, or stored on this or stored on that, you've tremendously lowered the cost and you can get more efficiency out of your team.
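That 30% figure can be turned into a rough cost-of-ownership number. This is a back-of-the-envelope sketch; the hours and hourly rate below are assumptions, and only the roughly 30% routine-task reduction comes from the interview:

```python
# Back-of-the-envelope management-cost model. Hours and rate are assumptions;
# only the ~30% routine-task reduction is taken from the interview.

def annual_admin_cost(hours_per_week, hourly_rate, weeks=52):
    """Yearly cost of routine storage administration time."""
    return hours_per_week * hourly_rate * weeks

def annual_savings(hours_per_week, hourly_rate, reduction):
    """Dollars freed up by cutting routine management time by `reduction`."""
    return annual_admin_cost(hours_per_week, hourly_rate) * reduction

# Say a team spends 40 hours a week on routine storage tasks at $60/hour:
print(annual_admin_cost(40, 60))     # 124800
print(annual_savings(40, 60, 0.30))  # roughly $37k a year freed up
```

The weighting factor he mentions would enter as per-task multipliers on the hours, so that savings on the tasks admins actually spend time on count more than savings on rare ones.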
As you mentioned VMware and virtualization: we just had John Twigel on a little bit earlier, and he was quite negative on virtualization in general, unless it's mainframe virtualization, that level of virtualization. What do you think is the state of the industry in terms of ease of use in VMware virtualization?

Well, VMware does a very good job of simplifying the interface, to the degree that it's influenced what IBM has done with PowerVM, it's influenced what HP is doing with HP-UX virtualization, and it's influenced Microsoft and Hyper-V, in dealing with managing ever more complex environments. When VMware was new, some years ago, it was pretty simple: you had a server, you virtualized it, and you could do stuff; you couldn't even cluster at first. But now you have these complex, very large environments with thousands of virtual machines, and it's more of a big deal; manageability is the big deal in virtualization. You look at it on the storage side: the storage appliances, the mid-range storage market, which is the prime storage environment for VMware and virtualized environments, especially with unified storage, where you have block storage and file-based storage, which fits into how VMware works, and even PowerVM and the other types of virtualization. The model of managing the storage through the virtualized environment starts making sense. After all, if everything is virtualized, you don't care about hardware. What's a drive? It doesn't matter. What's a LUN? It doesn't matter. All that matters is the thing that that system is doing. As an engineer, I do care about those things, but ultimately my job as an engineer, if I'm setting up a system for work, is that it does the job. If it doesn't do the job, I don't care how pretty it is; but if it does the job, the easier it is, the more work I get done in the day and the easier my life is.

What do you think about the flash movement?
Obviously, the theme here is flash everywhere. That's going to change some of your hands-on approach; obviously, different configurations. Some, on an extreme level, will say the disruption is massive, in the sense that it's a complete re-architecture; others are saying something different. Obviously, the elephant in the room has a different perspective depending on the view you look at. So what's your take on that?

It's still pretty specialized. I know that we've been looking at some testing that shows, well, I can't really talk about that. We've been doing testing that shows that the new technologies IBM is coming out with this week in flash storage have even better performance than could possibly be expected.

We're hearing numbers from SAP, and we're at Sapphire. And then, obviously, the suspects are Fusion and Violin. Two players have done these tests with SAP: five-minute page loads, with apps on big data, down to five seconds. Five minutes to five seconds. That's massive. That's not like a little rounding error. It's pretty substantial.

And that kind of testing is where the real value, I think, is going to come. That you can get ridiculously high IOPS from a database is nice, but it's really in this cloud app environment that response time to the application through a relatively slow network is the critical factor. Network, yeah. The performance on the back end of the storage starts helping, because the more of the bandwidth constraints, the performance constraints, you've removed from the back end, the better. People understand that their 3G or 4G network connection is slower, so they're willing to cope with that a little bit. But still, for a while. Yeah, for another year or so.

On the technology side, what are the top changes you're seeing? Because obviously in the marketplace there are a lot of obvious advantages, and the business models are somewhat changing.
You know, storage is kind of wrapped in as an element of a bigger solution. But at the tech level, what are the top three things you're seeing?

Well, first of all, one of the reasons I'm here this week is talking about RTC, Real-time Compression from IBM, which actually works just like they say it does. We've done three or four sets of tests on it so far, and it works like they say it does, and it makes the storage systems you're using more useful. We're talking about that VMware environment where you have to throw all sorts of workloads at a single storage system. Most organizations are not going to have ten storage systems with different management setups for a VMware environment; they're going to sort of blend it all in together. And we've seen with the V7000 that you can save on capacity. You have huge capacity efficiencies because of the compression, with no performance hit. That's a gigantic change, and something people aren't really conscious of, because compression on most mid-range storage arrays is not really very useful. I'll be talking about that tomorrow.

I think you're absolutely right. I think compression is one of the areas that no one's talking about yet that's right around the corner, because even in some of our research around the Hadoop community, where huge batch is trying to meet more real-time and high availability, they're compressing everything. Facebook was telling me privately that they're running compression on all their batch stuff because of the network. The network problems are forcing a compression mindset at the network level. Because you have to move less across the wire. Yeah.

How do you think about compression versus deduplication? They're good buddies. They do slightly different things. Good buddies. Like brothers, cousins, like buddies. Cousins they ain't; no, they're very related.
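The "move less across the wire" point is easy to demonstrate. A small sketch with Python's zlib standing in for any compression engine (RTC itself is a proprietary appliance, and this repetitive payload is made up):

```python
# Illustration that compressible payloads move far fewer bytes on the wire.
# zlib stands in for the compression engine; the payload is invented and
# deliberately repetitive, like typical log or VM-image data.
import zlib

payload = b'{"host": "vm-042", "status": "ok", "latency_ms": 3}\n' * 5000
wire_bytes = len(zlib.compress(payload, 6))

print(len(payload))                     # 260000
print(wire_bytes < len(payload) // 10)  # True: over 90% smaller here
```

Real savings depend entirely on how redundant the data is, which is why he notes that compression on most mid-range arrays has historically been less useful than this best case suggests.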
The underlying algorithms of compression and deduplication are very similar, pretty much the same in certain respects. The math is probably very similar; it's just how they work. Deduplication needs lots of identical data to see an effect. Compression doesn't care about that; it just compresses. Matter of fact, in the research we've done with RTC and Data Domain, where you do deduplication of compressed data, you get even greater efficiencies and capacity utilization. That's a tremendous boon in that type of market.

So do you dedupe and then compress, or compress and then dedupe? Compress and then dedupe. Why? Mostly because of the architecture. The real-time compression appliance, the RTC-A, goes on the network between the servers and the data storage system. So that's why.

And then there's the question that a lot of people don't talk about, which is encryption, right? So how do you apply encryption requirements to either deduplicated or compressed data? Can't do it right now. You just can't do it. Let's say in the practical general market it can't be done, because the algorithms that do compression and decompression, and deduplication, have to be able to see into the files, and by definition encryption is preventing you from seeing into the files. I would think there are certain organizations in the United States and elsewhere for whom building a system that could see inside that encryption might be possible. But our hosts here today, and all the other folks who might build that, will never talk about whether they actually have it. So we'll never know if that works.

Okay. Barry Cohen, CTO of the Edison Group, thanks for coming on theCUBE. We appreciate it. Great work, cutting-edge work. I love the angle on compression, and the cousins-or-friends comment was good. Best friends. Best friends, good job. Thanks so much. And we'll be right back with the wrap-up here on day two, right after the short break. Great, thank you. Thank you.
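The compress-then-dedupe ordering and the encryption point from the conversation can both be sketched in a few lines, with zlib standing in for compression, SHA-256 chunk hashing standing in for deduplication, and random bytes standing in for ciphertext. The data and chunk size are invented:

```python
# Sketch of compress-then-dedupe, and of why encryption defeats both.
# zlib = compression stand-in, SHA-256 chunk hashes = dedup stand-in,
# os.urandom = ciphertext stand-in. All data here is invented.
import hashlib
import os
import zlib

def dedupe(chunks):
    """Store each unique chunk once; return total stored bytes."""
    store = {}
    for c in chunks:
        store[hashlib.sha256(c).digest()] = c
    return sum(len(c) for c in store.values())

data = b"the same log line repeats\n" * 1000
chunks = [data[i:i + 256] for i in range(0, len(data), 256)]
raw_bytes = sum(len(c) for c in chunks)

# Compress first, then dedupe the compressed chunks (the RTC-A ordering,
# since the appliance sits on the network ahead of the storage):
stored = dedupe([zlib.compress(c) for c in chunks])
print(stored < raw_bytes)  # True: redundant data shrinks dramatically

# Encrypted data looks random, so compression finds nothing to squeeze:
pseudo_ciphertext = os.urandom(len(data))
print(len(zlib.compress(pseudo_ciphertext)) > len(pseudo_ciphertext))  # True
```

The second result is the practical version of his point: once data is encrypted, neither compression nor deduplication can see the redundancy it needs, so reduction has to happen before encryption in the pipeline.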