Live from Las Vegas, Nevada, it's theCUBE, covering EMC World 2015. Brought to you by EMC, Brocade, and VCE. Hey, welcome back everybody. Jeff Frick here, you're watching theCUBE. We're at EMC World 2015 in Las Vegas, Nevada. It's our sixth year coming to EMC World; it's actually where we started theCUBE many moons ago, and we're really excited for this next segment to have our very special guest host, David Floyer, CTO and co-founder of Wikibon. Welcome, David. Hi, Jeff. It's great to be back here again. And we've got a special guest today. His name is Tamir Segal, and he is a senior manager of product marketing for XtremIO. And we're going to discuss business continuity. Yes. So, introduce yourself, and how long have you been with XtremIO? So, my name is Tamir Segal. I'm a senior manager of product marketing at XtremIO, and I joined XtremIO about two years ago, actually more than two years ago. Initially I was based in Israel, where I was involved in product management at XtremIO, responsible for all of the XtremIO data services. And I just recently moved to the Bay Area. Ah, welcome. Excellent, welcome. Sticker shock. Yeah. Well, so this whole area of flash, it's been an exciting journey, hasn't it? From a fundamental point of view, what have you observed about XtremIO going in, and the impact it's had on the data center and on all the processes within the data center? So we all know that flash is disruptive. We know that it impacts the entire ecosystem. It's not only about speed, it's about the entire ecosystem: the way that people work with snapshots, the way that they design their workflows, how it impacts the application layer as well as the daily work of the people. So it's really disruptive, and people need to adapt and change for that. Right, so the traditional way of doing business continuity has always been pretty well replication, hasn't it?
Synchronous or asynchronous replication of one sort or another. So what's different about flash? Obviously the architecture of flash is different, everything's different about flash. Before we dive into the architecture, the first thing that we need to understand about flash is that flash is fast, right? Think about how many I/Os a flash array can support: a million, 1.2 million with eight XtremIO X-Bricks. So think about the amount of changes that you now need to replicate for one minute on a flash array versus one minute on a legacy array. Now think about the WAN usage that is required in order to replicate that one minute of changes. Another thing that you need to take into account is that you now have the ability to consolidate many, many different mixed workloads onto a single platform, so you have many I/O-hungry applications running concurrently, generating a lot of I/Os, and you need to be able to replicate them all together. Another aspect of it is that if you based your RPO on time, you probably did that because you calculated the cost to your business of losing one minute of data. But now, in one minute, you can get many more transactions than before, so your value per minute just goes through the roof, right? Right. So again, it impacts everything: it impacts the WAN, it impacts the infrastructure. You also need to take into consideration that once you turn on replication, and again, we're dealing with flash, you don't want that to impact the predictable, consistent performance of the all-flash array. You bought an all-flash array in order to get really good SLAs, sub-millisecond latency. You don't want to see spikes once you turn on replication. And that's obviously one of the issues with traditional replication, even at much lower, traditional I/O rates.
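To put rough numbers on that change-rate argument, here is a back-of-the-envelope sizing sketch. Only the 1.2 million IOPS figure comes from the conversation; the legacy IOPS, write fraction, and I/O size are invented assumptions for illustration, not vendor data.

```python
# Hypothetical WAN sizing sketch: bandwidth needed to keep up with the
# steady-state write rate of a legacy array vs. an all-flash array.
# All workload parameters below are illustrative assumptions.

def wan_mbps(iops, write_fraction, io_size_kb):
    """Megabits/second of WAN needed to ship every write as it happens."""
    write_bytes_per_sec = iops * write_fraction * io_size_kb * 1024
    return write_bytes_per_sec * 8 / 1_000_000  # bytes -> megabits

# Assumed legacy array: 10K IOPS; XtremIO 8 X-Bricks: ~1.2M IOPS (per the
# interview). Assume 30% writes at 8 KB each.
legacy = wan_mbps(iops=10_000, write_fraction=0.3, io_size_kb=8)
flash = wan_mbps(iops=1_200_000, write_fraction=0.3, io_size_kb=8)
print(f"legacy: {legacy:.0f} Mb/s, flash: {flash:.0f} Mb/s")
```

Under these assumptions the flash array generates 120x the change rate, which is the point being made: naive replication of an all-flash array swamps a WAN sized for legacy change rates (before deduplication and compression help).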
So one of the things that I think is completely different in the way that you approach things on XtremIO, and all-flash arrays, is the use of snapshots, the space-efficient snapshots, which are both read and write. So can you talk a little bit more about the fundamentals of how you would go about replication, or how you would go about business continuity, with an all-flash array? Sure. So what we've done is, again, the XtremIO snapshots are very unique. We have an in-memory metadata snapshot architecture that enables really fast, really rapid creation of snapshots, and we can delete them just as quickly. So what do you mean by fast? If I create a snapshot, it is immediately available. There's no copy of metadata involved in that process. Is it a second, or five seconds? In terms of the creation, it's immediate. It's a fraction of a second, a millisecond or even less than that. It's really fast. And again, the XtremIO in-memory metadata enables a lot of flexibility, so I can create very complex topologies, snaps on snaps, without any impact on performance. You will get the exact same performance on snapshots as you were getting on the production volume. Now, think about how I can leverage the XtremIO in-memory metadata, the ability to rapidly create and delete snapshots without impacting the system. Think about how I can leverage that in order to perform replication. So this is exactly what we've done. Basically, we're leveraging the XtremIO in-memory metadata together with our in-memory metadata processing capabilities, an API that we developed, and we merged it with the RecoverPoint technology. And now what we are doing is offloading the heavy tasks of replication to the RecoverPoint appliances.
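The idea of an instant, writable, metadata-only snapshot can be sketched in a few lines. This is a toy model for illustration, not XtremIO's actual implementation: a volume is a mapping from logical address to content fingerprint, and a snapshot just starts a new empty delta layer over a shared, frozen parent, so no data or metadata is copied at creation time.

```python
# Toy sketch of metadata-pointer snapshots (illustrative only).
class Layer:
    """A volume view: an empty delta of recent writes over a frozen base."""

    def __init__(self, base=None):
        self.base = base   # parent layer, shared with siblings
        self.delta = {}    # logical address -> content fingerprint

    def write(self, addr, fingerprint):
        self.delta[addr] = fingerprint

    def read(self, addr):
        if addr in self.delta:
            return self.delta[addr]
        return self.base.read(addr) if self.base else None

def take_snapshot(head):
    """Instant snapshot: the current layer becomes a frozen shared parent;
    production I/O and the snapshot each get a fresh empty delta on top.
    Nothing is copied, so creation cost is O(1)."""
    return Layer(base=head), Layer(base=head)  # (new production head, snapshot)

prod = Layer()
prod.write(0, "aaa")
prod, snap = take_snapshot(prod)
prod.write(0, "bbb")   # production keeps changing
snap.write(1, "ccc")   # the snapshot is writable too
print(prod.read(0), snap.read(0))
```

Because both sides only record their own changes, snapshots stay space-efficient and perform like the production volume in this model, which mirrors the read/write, no-performance-penalty behavior described above.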
So what we actually get is that as we add more and more workloads onto the XtremIO array, we can scale by adding more X-Bricks, but we can also scale the replication by adding more RPAs, again keeping the same predictable, consistent performance you would expect. Right, so you're able to take all of that. Now what does that mean in terms of RPO and RTO? It's always been a trade-off between the two, hasn't it? You can replicate synchronously, but as soon as you replicate asynchronously, you've really got a very thin pipe that you're putting a lot of stuff down. And you've always had to trade off RPO and RTO. So how does this work with an all-flash array? So again, the trade-offs are specifically around the RPO. The finer you want your RPO to be, the more costly it will be for you in terms of the WAN that you need to allocate. For example, with synchronous replication, you need to design your WAN for the maximum peaks you will have in the system, because otherwise you will get delayed while waiting for acknowledgement from the other side. Right. If you are willing to live with a longer RPO, then you can drive the cost down. So RPO is basically about trade-offs on cost. On RTO, I don't think you need to compromise. There's no trade-off between the two; the trade-off is within each one of them. The RTO that we are able to provide is basically immediate: any snapshot can be accessed immediately. And again, back to the replication capability that we developed using the RecoverPoint technology and the native integration with XtremIO: what we are capable of doing is providing an RPO of 60 seconds or less at scale. We can support the full potential performance of the XtremIO array and replicate it entirely using a single RecoverPoint cluster. So your RPO is at 60 seconds, and then you do another snap, and then you go through that process again.
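The WAN-cost trade-off described here can be made concrete: synchronous replication must be provisioned for the peak write rate (every write waits for the remote acknowledgement), while an asynchronous, snapshot-based scheme only needs the average rate over each RPO window. This is a hedged illustration with an invented bursty workload, not product sizing guidance.

```python
# Illustrative model of the sync-vs-async WAN provisioning trade-off.
def required_bandwidth(write_mbps_timeline, rpo_seconds):
    """Minimum WAN bandwidth (Mb/s) for a per-second write-rate timeline.

    rpo_seconds == 0 models synchronous replication: the link must absorb
    the instantaneous peak. Otherwise, each RPO window's changes just need
    to drain within the window, so the worst window *average* suffices.
    """
    if rpo_seconds == 0:
        return max(write_mbps_timeline)
    window_avgs = [
        sum(write_mbps_timeline[i:i + rpo_seconds]) / rpo_seconds
        for i in range(0, len(write_mbps_timeline), rpo_seconds)
    ]
    return max(window_avgs)

# One simulated minute: steady 100 Mb/s with a 10-second burst at 2000 Mb/s.
timeline = [100] * 50 + [2000] * 10
print(required_bandwidth(timeline, 0))    # sync: sized for the 2000 Mb/s peak
print(required_bandwidth(timeline, 60))   # 60s RPO: ~417 Mb/s average
```

Under these made-up numbers, relaxing RPO from zero to 60 seconds cuts the required WAN capacity by roughly 5x, which is the "longer RPO drives cost down" point from the conversation.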
So the fundamentals of how it works: basically, we create a snapshot, and the first time, obviously, we need to transfer everything to the remote side. There's no magic here; you can't just have the data appear at the other side. So you really need to sync everything to the other side first. Then, depending on the RPO settings that you have in the consistency group (we have the flexibility to set different protection policies per consistency group), you replicate the data: we create another snapshot using the XtremIO in-memory API, query for the differences between the snapshots, and only sync those changes between the two sides. It's really efficient. Back to the RPO question: one important aspect of it is the cost of WAN. Because we're offloading the heavy tasks of replication to the RPAs, we can actually allocate more resources to reducing the bandwidth that we consume. So that's another important aspect of it. So you can get down now to a true RPO of one minute, and that's amazing, isn't it? I mean, that completely changes the whole landscape for RPO and RTO. And again, you should not think about it with the legacy array approach; you should think about all-flash. Again, these arrays are disruptive, they generate a lot of I/Os, you want sub-millisecond latency, and you don't want it to be interrupted. And getting a 60-second, or even lower, RPO (in some cases we are actually able to support less than 30 seconds) is really important for your business. It minimizes your business risk in case of disaster. So how has that been for your customers? Tell me some examples of where they're using this, any specific areas where they are specifically using XtremIO rather than any other approach. So actually, I'll tell you a very interesting story.
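The cycle just described (full sync once, then snapshot, diff, and ship only the changes) can be sketched as follows. The function names and the dict-as-snapshot model are hypothetical stand-ins for illustration; the real product drives XtremIO's in-memory metadata API from RecoverPoint appliances.

```python
# Toy sketch of snapshot-diff replication: after an initial full sync,
# each protection cycle ships only addresses whose content changed.

def diff(prev_snap, curr_snap):
    """Addresses whose content fingerprints differ between two snapshots."""
    addrs = set(prev_snap) | set(curr_snap)
    return {a: curr_snap.get(a) for a in addrs
            if prev_snap.get(a) != curr_snap.get(a)}

def replication_cycle(prev_snap, source_volume, target):
    """One cycle: snapshot the source, ship the diff over the 'WAN',
    and return the new snapshot to diff against next cycle."""
    curr = dict(source_volume)            # instant metadata snapshot (toy model)
    target.update(diff(prev_snap, curr))  # only changed addresses cross the WAN
    return curr

source, target = {0: "aaa", 1: "bbb"}, {}
snap = replication_cycle({}, source, target)    # first cycle == full sync
source[0] = "ccc"                               # production keeps writing
snap = replication_cycle(snap, source, target)  # ships only address 0
print(target)
```

Running a cycle every 60 seconds gives the roughly one-minute RPO discussed above, and because only the diff is transferred, repeated writes to the same block within a window cost one transfer, not many.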
So part of the unique capability of the XtremIO array is, again, the in-memory metadata, the instant snapshot creation, the space-efficient snapshots that we have in XtremIO. So one of the more interesting use cases is leveraging the DR capabilities, the copy at the DR site, in order to create multiple copies for test and dev. Combining DR with test and dev provides you with a lot of cost savings, provides you with the confidence that your production is protected, and lets you consolidate your DR and test-and-dev environments. And it provides a lot of efficiencies, because with the space-efficient snapshots you can create them whenever you want, you can create as many as you want, and they give you the same performance as the production volume. So that's a pretty impressive story. So the speed just opens up all types of new opportunities in the way that you approach the problems? Right, but again, it's not only speed; it's also about the data services, the efficiencies that you get, the inline deduplication, the compression, the space-efficient snapshots, and now the really powerful replication capabilities. If you combine them all, I think it's a very attractive and unique value proposition for our customers. Right, and like you said, just because of the density of the application throughput and the capacity, now your value of time goes up significantly. So this announcement of the RecoverPoint integration, that's part of 4.0, is it? It's part of the 4.0 release. It will be available at the end of the quarter. Available this quarter, wow. At the end of this quarter. This quarter. So talk a little bit more about where you put these. With RecoverPoint, my understanding is you don't have to put it on XtremIO at the other end; you can actually put it on anything you want, is that right? Yeah, so we actually allow our customers to leverage existing assets, so it is not mandatory to use XtremIO at the DR site.
So basically what you could do is replicate from XtremIO at the production side to, let's say, a VMAX, a VNX, or actually third-party arrays using VPLEX. So it really allows you a lot of flexibility in terms of using your existing assets. Amazing. This flash story just continues to get better and better. We're getting the hook here, so I want to give you the last word. Don't give away any trade secrets, that's going to get you in trouble, but what are some of the next kind of hills you guys are looking to climb? What are some of the challenges that you're excited about over the course of the next six months, before we see you again in 2016? So I don't want to get into too much detail, but think about how powerful the XtremIO in-memory metadata architecture is, what it is enabling, and what an API for it can enable us to do in the future. I think this is where we'll head; we'll continue investing in more and more data services for the all-flash array in order to improve it and provide new services for our customers. Exciting, exciting times. It's already one of the fastest-growing products in sales in the history of EMC, I think they were saying, so that's greatness. Thanks for stopping by, appreciate it a lot, hopefully you had a good time on theCUBE. David, as always, love to get you out of the analyst meetings and get you on theCUBE. I'm Jeff Frick, you're watching theCUBE, we're at EMC World 2015, we'll be right back with our next guest after this short break. Thanks for watching.