Live from Las Vegas, it's theCUBE. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. Hey, welcome back to the Sands, everyone. John Walls here along with Keith Townsend, and we are at Dell Technologies World, day one of three days of coverage here on theCUBE. Keith, good to see you, sir. It's been a while, huh? It has been about six months. Where have we been? And you've got that going on. You look so distinguished and professorial. You know what? You make up for the lack of hair. I appreciate that, you know. Well, it looks good. We have two guests with us to talk about XtremIO. We have Chhandomay Mandal, who is a vice president, or rather director, of marketing. I'll give you a promotion. Get a promotion. Can I get one too? Director of P&P. Just like that at Dell. And Candy O'Mara is a solutions architect at VMware. I'm sorry, no promotion, Candy. Oh, no. That's the way it goes. So, Chhandomay, if you would, before we get started, let's jump in and talk about XtremIO a little bit. Tell the viewers at home a little bit about the product, and then we'll get into VMware's use of it and how that's taken shape. Sure. So, XtremIO is the purpose-built, market-leading all-flash array. It's built on a unique content-aware, metadata-centric, multi-controller architecture coupled with intelligent software that helps us deliver very high performance, ranging from hundreds of thousands to millions of IOPS with consistently low sub-millisecond latency, irrespective of the system load, how much data has been written to the array, or the workload characteristics. Now, this metadata-centric architecture lends itself to a lot of other benefits.
For example, we do inline, all-the-time data reduction in the data path, and that leads not only to very high storage efficiencies but also, since we never write anything to the SSDs that isn't unique, to much more longevity for the SSDs themselves, driving down costs. The other thing is, it's pretty simple to use. Candy can probably speak to that. And probably from a customer perspective, right? I mean, that's the huge value. Yes, it's pretty simple to deploy. We have an intelligent HTML5-based UI that's consumer-grade, easy to use, while at the same time providing all the enterprise functionality you would expect. The fourth thing I'd mention is integrated copy data management. Because this is an extremely high-performance all-flash array, it's expected to do great in OLTP environments, virtualized environments, but on top of that, because of this always-in-memory metadata architecture, the copies are literally as good as production volumes. So it's not just for protection; you can actually use the copies to run workloads on them, and you get the same performance and the same inline, all-the-time data services on the copies, and you really cannot tell the difference between a production volume and a copy volume. That leads to a lot of business benefits in terms of consolidating various copies and changing application workflows. So, Chhandomay, we'll dig into that in a second with the inline dedupe and the copy data management, but first let's bring it up higher in the stack. Candy, amazing performance numbers out of XtremIO, but the all-flash market is an extremely crowded market. For the average end user, as you engage customers: VMware runs on Dell Technologies, and Michael Dell would say Dell Technologies runs best on Dell Technologies.
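To make the content-aware, inline dedupe described above concrete, here is a minimal sketch in Python. This is a toy model, not XtremIO's actual implementation: incoming data is fingerprinted, only unique content is ever written to the physical layer, and a duplicate write updates only the in-memory metadata map.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: only unique content ever hits the 'SSD'."""

    def __init__(self):
        self.blocks = {}  # fingerprint -> data (stands in for SSD writes)
        self.meta = {}    # logical address -> fingerprint (in-memory metadata)

    def write(self, addr, data):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:   # unique content: one physical write
            self.blocks[fp] = data
        self.meta[addr] = fp        # a duplicate only updates metadata

    def read(self, addr):
        return self.blocks[self.meta[addr]]

store = DedupStore()
for addr in range(1000):            # 1,000 logical writes of identical data...
    store.write(addr, b"same 8K block")
print(len(store.blocks))            # ...exactly 1 physical block stored
```

Because identical content hashes to the same fingerprint, a thousand logical copies cost one physical block, which is also why dedupe extends SSD longevity: duplicate writes never touch flash.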
How do you help customers navigate the broad Dell Technologies portfolio, when you have all-flash vSAN, you have Isilon, you have Isilon with flash, you have all these solutions, and then arrive at the typical use cases for XtremIO? Right, so for our instance, the first implementation of XtremIO we did was with SAP HANA. Now, that's an in-memory database, so everything's in memory. You need a really fast back-end storage array, so XtremIO, all-flash with sub-millisecond latency, is a perfect fit. If your database is all in memory, you can't have slow storage behind it. You'll lose the performance, right? Your database will become degraded. So that was our reason for going that direction, and it's because of the in-memory nature of SAP HANA. Now, the rest of those infrastructures actually have good use cases for other things, but in this case, for us, it was XtremIO. So let's focus in on that SAP HANA use case. SAP, an in-memory database: a lot of SIs will tell you, you know what, the storage layer just needs to be fast, it doesn't have to be XtremIO fast. What did you guys find? What were the specific advantages in SAP HANA that brought you down to XtremIO? I mean, the writes are done in memory, so... Well, no, the writes actually go to disk. It is in memory, but it still has to write to disk and get the response back, especially the writes, right? And SAP HANA has very specific requirements; when you're loading up the database, it needs to load up in a very specific time period. It's like a tenth of a second that you can have. You still need an all-flash array for SAP HANA, even though it is an in-memory database. That's the misconception. People are like, oh, we can put it on slower storage.
It's like, no, you actually need the storage to be able to respond back to the database as quickly as it does. The minimum requirement, I mean, the maximum latency, is like a tenth of a second. It's really low, but we're at seven milliseconds, so we have effectively no latency. I mean, we are actually getting the throughput and the performance. And there are other benefits with it as well. I mean, the dedupe, the always-on data reduction, that's huge, that's a big factor. When you don't have to have multiple copies sitting on your array, that saves you a lot. So, like Keith was saying, in a crowded market, a lot of options, a lot of choices, what was it for you that made you say, okay, this is our pick, this is the one we want to dance with, so to speak? Because, again, you've got a lot of options in there. It was the response that was needed for performance. And it was all-flash; we were making a decision on where we wanted to run SAP HANA. We did not have it implemented anywhere else. We had existing infrastructure, we were moving to a new data center, and we had to make a decision on where we wanted to go. And XtremIO fit the bill. It met many of our different requirements. One of them was performance. The second one was a lower total cost of ownership, right? And then the snap technology, that was huge. So let's talk a little bit more about that snap technology. I've spent a lot of time as an SAP infrastructure architect, and one of the most painful parts of SAP operations is refreshing dev, QA, N+1, the lower environments, from production. What advantages have you and your customers seen using snap management with XtremIO? So let me give you the broader view, and then you can talk about the very specific instances that you have seen, right? So XtremIO's snapshot technology, we call it XtremIO Virtual Copies, is based, again, on the in-memory metadata.
And XtremIO doesn't write anything to the SSDs unless it's unique across the entire cluster. A snapshot, by definition, is a copy until you mount it and make it writable. So for us, taking a snapshot is an extremely fast operation, because all we are doing is updating the metadata in memory. If you are keeping it as a protection copy, say as a read-only copy just to recover from a disaster, that's one purpose. But the other purpose is to use them as writable snapshots where you can run your test/dev, a copy for backup, all those things. Now, why can we do this? The reason is that all these copies are not consuming any extra space until you write something unique to them, as a test/dev copy, right? So now you have the capability of consolidating lots of copies. For example, in our traditional customer base, for every database there are literally five to eight copies. So 60% of the storage that gets consumed is essentially copies. Now, if you can consolidate all those copies onto a single array without consuming any extra capacity, while at the same time delivering that very high performance, not only for your production environment but also for your test/dev, QA, and sandboxing, that gives the customer a lot of value, not only in terms of infrastructure dollars but also in transforming application workflows and improving the productivity of the developers and the storage admin, the VM admin in general. So that's what we see across the board from our various customers. Now, what's your experience? Actually, what we do is a little different. We actually use the writable, full-performance snapshots. We use them at our DR site.
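The in-memory-metadata snapshots described above can be sketched in the same toy style (again a hypothetical model with illustrative names, not the real XtremIO Virtual Copies code): a snapshot copies only the address-to-fingerprint pointers, so it is instant and consumes no capacity until something unique is written to it.

```python
import hashlib

class Store:
    """Shared physical pool: fingerprint -> data."""
    def __init__(self):
        self.blocks = {}

    def put(self, data):
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)   # write only if content is unique
        return fp

class Volume:
    """A volume is just an in-memory map of logical address -> fingerprint."""
    def __init__(self, store, mapping=None):
        self.store = store
        self.map = dict(mapping or {})

    def write(self, addr, data):
        self.map[addr] = self.store.put(data)

    def read(self, addr):
        return self.store.blocks[self.map[addr]]

    def snapshot(self):
        # A snapshot copies the pointers, not the data: instant, zero capacity.
        return Volume(self.store, self.map)

store = Store()
prod = Volume(store)
for i in range(5):
    prod.write(i, b"prod block %d" % i)

dev = prod.snapshot()               # writable copy for test/dev
dev.write(0, b"patched block")      # unique data lands only on this write
print(len(store.blocks))            # 6: five shared blocks + one new one
print(prod.read(0))                 # production copy is untouched
```

This is also why a writable copy performs like production in such a design: reads on the copy resolve through the same metadata path to the same shared blocks.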
And what we'll do there is mount those into a test bubble. Instead of needing a separate dev environment, we can mount those writable snapshots, or copies, in a little isolated bubble and test anything we want against our own little production environment, right? And then toss it away when we're done. So we can test out a new release, or we can do something different with the database or an application, and then when we're done, toss it away. That way, we don't need so many different environments built out, so it's a savings there. We don't make the local copies we were talking about for stage and dev; those are already built out. But we do put those on the same array now. It used to be you'd have production on one array and stage on a different one, right? But now, because they're similar and you want the dedupe and compression benefits, you want them on the same array, because that's where you gain them. So the snapshots we do at the target, the ones we play with, are writable and full performance, right? I mean, it's the same performance as if you're on the source, which is the big game changer there for us. And I think it's really important, from a technical perspective, to know why XtremIO is so much better at snapshot management. One of the things that vendors will warn us about is that snapshots degrade performance over a period of time. So the fact that you guys have a dedicated metadata subsystem helps improve overall performance. But I'd like to talk about your use case for extending to your DR side. So for DR, what do you guys use to replicate data from one XtremIO to your DR site? Right now, for us, with SAP HANA, we're using RecoverPoint with the XtremIO snapshots, which is fabulous, because once the two sync up, after that first initial sync, RecoverPoint literally just goes out and gets a snap diff. And that's all the data it's transferring over.
So it lowers the requirements on your WAN. Your bandwidth requirements are lower. So that's what we're using today. It's a great tool for us. And that way, we can mount it at the target site. And then just briefly, we're about out of time, Chhandomay, if you would: going forward, let's talk about where you are in terms of development. What do you see as maybe the next critical phase for XtremIO? So in fact, here at Dell Technologies World, we are announcing availability of our native replication technology. Candy mentioned she's using XtremIO with RecoverPoint. That's a great solution. Now, we are going to have the native replication technology, and what's different from other solutions out there is that this replication is also metadata-aware. As a result, it's not only sending only the unique data over the wire, but that data is also globally deduped and compressed. Suppose on your target site you already have a data block that might be unique on your primary site, and hence the primary says, hey, I need to send this data over to you. Our protocol is going to say, yep, I have this metadata, I already have it, so send me the metadata pointer to it and we are all done. We don't even need to send the block that was unique on the primary site if it happens to exist on the secondary site. Netting it out, as a result we see a great reduction in the WAN bandwidth that's going to be used, and the total capacity that you will need between primary and secondary will also be reduced. In fact, our numbers say you can use up to 38% less storage capacity, and WAN bandwidth could be reduced by as much as, say, 75% to 80% compared to the traditional mechanisms. So we actually did a test on this to see the performance difference between replicating a database using RecoverPoint on XtremIO with snapshots, and then doing it with XtremIO native replication.
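The snap-diff plus metadata-aware replication described here can be sketched as follows (a hypothetical model; `snap_diff` and `replicate` are illustrative names, not a real API): only addresses that changed since the last replicated snapshot are considered, and a block crosses the WAN only if the target doesn't already hold its content.

```python
def snap_diff(prev_map, curr_map):
    """Addresses whose content changed since the last replicated snapshot."""
    return {a: fp for a, fp in curr_map.items() if prev_map.get(a) != fp}

def replicate(diff, source_blocks, target_blocks):
    """Ship only blocks whose content the target doesn't already hold."""
    sent = 0
    for addr, fp in diff.items():
        if fp not in target_blocks:             # target lacks this content
            target_blocks[fp] = source_blocks[fp]
            sent += len(source_blocks[fp])
        # else: the content already exists remotely; only the small
        # metadata pointer needs to travel, not the data itself.
    return sent

# Last replicated snapshot vs. the current one on the source:
prev = {1: "a", 2: "b"}
curr = {1: "a", 2: "c", 3: "b"}      # addr 2 rewritten, addr 3 is new
src  = {"a": b"AAAA", "b": b"BBBB", "c": b"CCCC"}
tgt  = {"a": b"AAAA", "b": b"BBBB"}  # target already holds "a" and "b"

changes = snap_diff(prev, curr)      # two changed addresses: {2, 3}
print(replicate(changes, src, tgt))  # 4 bytes: only block "c" crosses the WAN
```

In the example, address 3 is new on the source, but its content already exists on the target, so no data is sent for it; that is the metadata-aware savings on top of the plain snap diff.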
And it was eight times faster. Eight times faster, replicating the same amount of data. Business value: a better recovery point objective, so less data loss in case of an emergency, just a higher level of service to the business. Right. You're a happy customer, right? Yeah. No, I actually love this product. I would not be talking about it otherwise. I couldn't tell. I really like XtremIO, and I've been doing this for a while. Candy and Chhandomay, thanks for being with us. We appreciate the time. Thank you. Sorry about the promotion. Hey, take it back for me. Can you convert it, though? Can I get that? Good deal. Thanks for joining us. We appreciate it. Thank you. Back with more from Dell Technologies World here in Las Vegas. You're watching theCUBE, back in just a bit.