Live from San Francisco, California, it's theCUBE at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix. Now, here are your hosts, John Furrier and Dave Vellante. Okay, welcome back. And we're here live in San Francisco for VMworld 2014. It's our fifth year, excited to be here at theCUBE, extracting the signal from the noise. I'm John Furrier, here with Dave Vellante, and our next guest is the CTO and principal SE of XtremIO. How do you say your last name? Give us the Hebrew. Like that. Like that, okay. So, you know, it's the east coast accent, I can never get that right. But welcome back to theCUBE. Thank you for having me. You guys know the deal right now. Obviously, we are totally interested in flash. The market is as well. Give us a quick update of what's going on with XtremIO in the field right now. Obviously, now that you're in general availability, it's a pretty hot product. You need to line up around the corner to buy it. It's like a bread line pretty much out in, you know, starving, you know, converged infrastructure land. So what's going on now? Give us the update. What's happening? Right, that's a big question, I'll try to answer it in a few sections. Don't worry about the bread lines. The lines are like... I love the analogy, John. So, as you know, the directed availability phase was in 2013, and we went GA in November. And it was absolutely booming. I mean, we were still fairly small in terms of R&D and the SEs in the field. And the product has been doing absolutely amazing in terms of different use cases. But back then, back in 2013, and even in the beginning of 2014, we called it phase one. Phase one was for use cases that were obvious for an all-flash array, like performance. Let's say databases. Customers with database needs: they need to reduce the latency, they need to increase the performance, they need to increase the bandwidth. They moved those workloads to XtremIO, right? That was the most obvious use case, phase one. 
And also, from a low-hanging fruit perspective, that was one isolated use case; customers didn't just move their entire workload to XtremIO. Then in the beginning of 2014 came what we call phase two. Phase two means data services. So for use cases like VDI, where you enjoy the data reduction technologies like deduplication, or for the database workloads, things like snapshots, which we announced and which are already GA, right? So you have a database and you now want to clone or snapshot the same database 250 times without impacting the source volume performance. That's something you weren't able to do in the old days, because it would always impact the source, because it was spinning. It was spinning, and the engine utilization was suffering very much as well. So it wasn't just the drives; it was also about the cache. The caching layer on traditional arrays was suffering big time. So you had to add more engines just to compensate for the fact of adding more snapshots. So customers fixed this problem on traditional arrays by moving to clones. But clones, as you know, require different spindles. Different spindles require more floor space, a lot of power, a lot of cooling, a lot of money, right? The box. The box, more boxes, 70 boxes more. Good for business. Good for business, absolutely good for business. But not sustainable. Not sustainable. So that was phase two. And right now, with the launch of version three, we are in what we call phase three. Phase three is going all in on XtremIO. I have a session on Wednesday, and I'm actually going to speak about a customer that was using XtremIO for phase one and phase two. It's a financial customer in EMEA that was using XtremIO for database consolidation, performance, and snapshots. And now they've decided to move their entire virtualized workload, everything, to XtremIO 
across two sites with VPLEX in between, which will make the entire solution an active-active solution. And the road was very easy. They just weren't sure about the TCO in terms of dollars per capacity, dollars per gigabyte, as opposed to dollars per performance. So together with Mitrend we developed a free tool that knows how to analyze your existing environment, whether it's physical or virtual. And it can tell you, from a capacity point of view, taking into account our deduplication and thin provisioning savings, and the compression savings that we just announced with version three, how much capacity it's going to require on XtremIO. And you know what? Just ignore performance, and ease of management, and the requirement to not have different volumes with different RAID structures. Just in terms of TCO, dollars per gigabyte, it was actually cheaper than a hybrid array. So that was a no-brainer question for me. So we're there, we're there. We're there. We're absolutely already there. Flash is less expensive than traditional arrays. Correct. For certain segments of the market, obviously. Correct, not for every workload. I would say that some workloads are still not there. Obviously archiving; you're not going to use XtremIO instead of Data Domain. And something like Microsoft Exchange may not be the best fit either; you will still require a hybrid array. But for pretty much everything else that is out there, it's already there. So I ask all the folks out there, the same question to the flash guys. I mean, flash is obviously great. We're big fans of it. But there are different levels of storage needs. There's latency, real-time, transactional, throughput, and cost per terabyte, per petabyte, per gigabyte, whatever it costs, basically cost per medium. So we've got to ask you the question. Can you do all three? Can you serve all masses in a flash environment? And is there a notion of capacity flash versus performance flash? 
Because that seems to be a conversation we were having earlier this morning with folks in cloud: that there are different versions of flash. Certainly the economics are awesome, ridiculously inexpensive, if you will. Capacity and performance flash. Is there a difference in real-time or transactional throughput cost per gigabyte? Yeah, that's a very good question, actually, because people in many cases, and I guess I should be one of them as well, will preach an all-flash array for every use case under the sun, but that's not necessarily the case. I mean, for example, you mentioned performance and bandwidth. I will also take it a step further and speak about data services. Maybe you just need performance and that's it. And for this we have an absolutely good product called the VNX-F, right? It provides you the best TCO, dollars per gigabyte per performance, without the data services, without the deduplication and the compression. And you know what, that's absolutely fine, because not every use case needs or can benefit from those data services. For example, databases. Databases do not have deduplication associated with them. You can have compression with a database, but not deduplication, right? So if the customer just needs a good flash array that in terms of bandwidth and performance and very low latency can cope with the workload, that's a VNX-F use case, right? Which is actually going to provide a better TCO than XtremIO. On the other hand, if you are looking at other use cases, right, that require compression, require deduplication, and, I would say probably the most important aspect, the scale-out mechanism, which none of those hybrid arrays, including our own hybrid arrays, have, then XtremIO is the right solution for you, because then you have building blocks that scale both the capacity and the performance. By the way, that's another point that I'm going to speak quite a lot about on Wednesday. 
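The dollars-per-gigabyte comparison he keeps coming back to is, at bottom, simple arithmetic on data-reduction ratios. A minimal sketch of that sizing math, with purely illustrative ratios (these are assumptions for the example, not measured XtremIO figures):

```python
def required_flash_capacity_gb(logical_gb, dedupe_ratio=3.0,
                               compression_ratio=2.0, thin_savings=0.25):
    """Estimate physical capacity needed once data services are applied.

    thin_savings is the fraction of allocated space that was never
    written (thin provisioning makes it free); dedupe and compression
    then multiply together on what actually gets written.
    """
    written = logical_gb * (1.0 - thin_savings)
    return written / (dedupe_ratio * compression_ratio)


def effective_dollars_per_gb(raw_dollars_per_gb, logical_gb, physical_gb):
    """Raw media price scaled down by the achieved reduction factor."""
    return raw_dollars_per_gb * physical_gb / logical_gb


# 1 TB of logical data at these assumed ratios needs only 125 GB of flash,
# so even expensive raw media can land below a hybrid array's $/GB.
physical = required_flash_capacity_gb(1000)
price = effective_dollars_per_gb(8.0, 1000, physical)
```

With these example ratios, 1,000 GB of logical data needs 125 GB of physical flash, and $8/GB raw media works out to an effective $1/GB, which is the kind of result that made the customer's TCO question a "no-brainer."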
Some vendors out there tell you that you just need two engines and that's good enough, because nobody needs one million IOPS. Well, they need to speak to my customers, because I'm actually out in the field all day long, and they do need one million IOPS. Even if they don't need it today, they will need it in the future. I always use the same analogy of the iPhone, right? What was wrong with the iPhone 3GS? I mean, what device are you using? iPhone, of course. iPhone, but you're not using the iPhone 3GS, right? No. Why are you using an iPhone 5 or iPhone 5S? Because the moment Apple brings a new device to the market, the developers immediately know how to take advantage of, or abuse, the hardware, and write new applications that take advantage of the new hardware. And the same thing applies to XtremIO. We were talking before about snapshots. So up until now, customers were using maybe two or three or four snapshots a day, but now they can use 20 or 200 snapshots a day, right? Because there is no penalty on the system itself. Well, my old iPhone was so much faster than my new iPhone because it wasn't doing anything. There you go. Well, we see the same thing in data centers. Exactly. When the iPhone came out, it was locked to only web-based applications, but now everybody can write applications for the App Store. And that's exactly the thing with XtremIO. That actually leads me to another point. We have a free plugin at EMC called VSI, Virtual Storage Integrator. It's a free plugin that sits in vCenter and gives you the capability to view information from your storage array and provision volumes from your storage array. And in the context of XtremIO, it knows how to take, let's say, a master image, in the use case of VDI, and clone it thousands of times. And we said, you know what? Why do we even want to mess with cloning, although we support XCOPY and it's all done in metadata and memory and it's 10 times faster? 
Why should we settle for 10 times faster? We can use snapshots, which are immediate, right? And we can make it 1,000 times faster. So now customers are actually using this plugin, which was originally meant to be used only for VDI, for snapshot-based VDI VMs, and they're actually snapshotting their virtual server workloads and taking advantage of the system itself. That's, again, a case where a customer actually educated us about the potential of the array itself. I want to talk a little bit more about snapshots. So we had Avishek on before and he was saying, you know, the whole architecture matters; you guys are big on that, and I'm starting to believe you really believe that, so I'm looking for more proof points. So I said, give me an example. And of course he chose the inline dedupe, which you guys have used a lot. And he said there are many, many others. Snapshots seem to be something that is interesting for customers. They're finding new ways to use snapshots. So I wonder if we could talk about that a little bit from two perspectives. One is the architecture, because Avishek's point was, no, no, we've designed this feature set specifically for flash. And I said, oh, a competitor can take the same thing. He goes, well, let me give you an example, because they are saying the same thing. So he took inline dedupe as one, and he said, many, many others. So I wonder if we could talk about snapshots. How is the implementation of snapshots different for the all-flash array, for XtremIO, than it would have been for, let's say, legacy systems and maybe competitive systems? And how are customers using snapshots in new ways? You've touched on it, but I really want to dig into this one. Yeah, so that's a great question. So first let's talk about architecture from a high level. So let me tell you a secret. For us, snapshots are just a volume. We don't have a concept of a snapshot. 
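The "snapshots are just a volume" claim rests on content-addressable storage: a volume is only a map from logical addresses to content fingerprints, so a snapshot is a copy of that map, not of the data. A toy sketch of the idea (class and method names are hypothetical, and this is a simplified model, not the actual XtremIO implementation):

```python
import hashlib


class ContentAddressableArray:
    """Toy model: a deduplicated block store plus per-volume metadata maps."""

    def __init__(self):
        self.blocks = {}    # content hash -> block data (stored once: dedupe)
        self.volumes = {}   # volume name -> {logical address: content hash}

    def write(self, vol, lba, data):
        h = hashlib.sha256(data).hexdigest()
        self.blocks[h] = data                      # identical blocks collapse
        self.volumes.setdefault(vol, {})[lba] = h

    def read(self, vol, lba):
        return self.blocks[self.volumes[vol][lba]]

    def snapshot(self, src, snap):
        # A snapshot is just another volume: copy the metadata map,
        # copy zero data blocks, consume zero data capacity.
        self.volumes[snap] = dict(self.volumes[src])
```

Overwriting the source after a snapshot only changes the source's map entry; the snapshot keeps pointing at the old hash. In this model there is no copy-on-write penalty and no extra load tied to the parent volume, which is the behavior he describes.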
Yes, you can create a snapshot in the system, but it's just like any other volume. It will look like a clone. That's what makes it a snapshot. Yeah. For us, a snapshot really means that it's the same data as the original volume. And because of our content-addressable storage architecture, when you create a snapshot, you create a volume, and you don't consume any capacity on the array itself. And you also do not consume CPU cycles from the array itself. That in itself is a huge value, because on traditional arrays, every snapshot is tightly attached to the parental volume itself. That kills the CPU utilization, the engine, right? And that's causing customers to either buy more engines or more controllers on their... Pay the price. Exactly. Or just forget about it, because they don't want to affect their parental source volume performance, and go with clones, right? But clones require different spindles. Different spindles require much more power, which, of course, eventually will lead the customer to say, you know what? I'm only going to have maybe four clones a day. That's it, because that's what I can sustain without killing my source volume performance. What we are now seeing is that the question is not how many snapshots can we deliver, but rather how many snapshots do you, as a database administrator, for example, want to have? So, for example, we have a customer in Scotland, a financial customer, that's now running hundreds of snapshots almost every day, because there is no penalty on the array itself. Another use case would actually be virtualized environments. One of the companies that we're actually allowed to use as a reference customer is Amdocs. You know Amdocs, right? Sure, yeah, of course. They deliver services for billing... A local company for you, right? A local company for me, correct. So their use case was: we are using vCloud Director from VMware. 
We are provisioning a vApp, which is a logical container that contains many VMs running the same business logical entity, let's say SAP, for example. And every change that we are uploading to the system needs to be represented as a new vApp, right? That will kill a hybrid array, or should I say a traditional array, because every one of these clones will consume more power from the CPUs, more capacity from the array itself. With us, they don't pay for it at all. So they are now actually doing a clone for every code revision, as opposed to a cumulative set of, let's say, seven code changes every week. That makes sense. So now they can actually roll back to a very specific point in time with this particular snapshot, right? That's one thing. The other thing that is interesting for us in terms of snapshots is that one of the problems with traditional snapshots is the hierarchy. If you create one or two snapshots, that's okay, but if you create, let's say, 60 snapshots, and you are querying data that resides on the parental volume, the traversal, the seek time, even on flash, will take a lot of time, right? To find its way from snapshot number 60 to the parental volume, which is number one. For us, we have a special mapping that reduces this trip even further. So there is no seek time. I mean, forget about the seek time of the non-magnetic device, which is flash, but also the seek time in terms of the software itself, right? So that's another benefit. And you're keeping track of all these snapshots in your system. You've got... you have a catalog to do that? We have a special table just for snapshots, yeah. Interesting. Yeah, very. So what's the disruptive technology coming in flash? What's next? What's the big deal next? I think, again, it's a good question, but it really depends who you ask, right? For me personally, my background is not even storage. I'm a VMware admin; I've been a VMware admin for, I guess, the past 12 years. 
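The hierarchy problem he describes can be sketched as the difference between walking a snapshot chain back to the parent and resolving every block through one flat map. This is a purely illustrative model of the two lookup patterns, not the actual on-array mapping:

```python
def chained_read(chain, lba):
    """Traditional model: chain[0] is the newest snapshot and chain[-1]
    is the parent volume. Walk back level by level until one owns the
    block. Returns (data, number of lookups performed)."""
    for level, mapping in enumerate(chain):
        if lba in mapping:
            return mapping[lba], level + 1
    raise KeyError(lba)


def flat_read(flat_map, lba):
    """Flat metadata map per snapshot: every block resolves in a single
    lookup, no matter how deep the snapshot tree is."""
    return flat_map[lba], 1


# A block that only exists on the parent, read through 59 snapshots:
parent = {0: "data"}
chain = [{} for _ in range(59)] + [parent]
# chained_read walks all 60 levels; flat_read is always one step.
```

In the chained model, the cost of reading unmodified data grows with the number of snapshots between you and the parent (which is the "software seek time" he mentions); a flat per-snapshot map keeps the cost constant, so creating snapshot number 60 is as cheap to read through as snapshot number one.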
So you see two tectonic shifts, right? Software-defined, things like VSAN, ScaleIO, Nutanix, SimpliVity, where everything is condensed, hyperconverged. But those are still fairly young when it comes to data services and distributed algorithms, right? And on the other end, you see customers who still want to buy a box, even if it's an all-flash array. By the way, there are many good reasons to buy a box that are not necessarily technical, for example, responsibility. The box is not going away; I mean, Nutanix proved that. Correct. I'm just talking from a warranty point of view, right? If I'm buying a server from one manufacturer and the software from another, and something breaks, who do I call? But we do start to see them maybe colliding into each other. So maybe a VSAN that will be based around an all-flash array, or things of that nature. And that, to me, is the real future, maybe the combination of the two. That's one angle to look at it. It's great to have you on theCUBE. Appreciate it. The impact of flash on virtual servers, databases, virtual workloads, that's your session. Thanks for sharing on theCUBE here. Thanks for having me. Getting all the data, finding out who's got what in the converged, hyperconverged, end-user computing journey to the private cloud, we're all here. It's theCUBE, getting the data, sharing that with you. We'll be right back after this short break. Thank you very much.