Okay, we're live, this is Dave Vellante at wikibon.org, and we're back at the Dell Storage Forum. I'm here with Stu Miniman, who's also with Wikibon. This is theCUBE, SiliconANGLE's continuous production of Dell Storage Forum 12. We're here with Brett Roscoe, who's the general manager and executive director of Dell PowerVault, and owns the data management solutions at Dell. Brett, welcome to theCUBE. Oh, thanks for having me, good to be here. So we just had Travis Vigil on, and he was talking about sort of the positioning. Let's start with this fluid data architecture. Earlier, we had Carter George on, and I asked him, why is it more than just great marketing, the fluid data architecture? Same question to you. Yeah, no, it's a great question. The fluid data architecture started with kind of a strategy around how do we solve customer problems, right? What are the issues that customers are facing, and how do we bring value and unique capabilities? The fluid data architecture started that way, and then we developed the tenets around the architecture. And as we've done that, it's really evolved into kind of a path, or if you will, a way for our developers to really center around the idea of unique capabilities, right? So if you look at tenets like self-protecting storage, the whole idea of that is to develop a portfolio that makes data protection and data recovery very automated and very easy to use. So as our engineers work on those kinds of programs, they can look at those tenets as guideposts to know, okay, I'm working on a program that benefits a customer in this way and provides a solution for these problems. So it's really been great, and not just an outside thing that we use with customers in terms of explaining the unique capabilities or the ease-of-use capabilities within our products.
It's also a way for us internally to develop products and make sure we're focusing on the right things and the right uniqueness and the right capabilities for our customers. Okay, and then talk about AppAssure, where that fits and what the direction is. Yeah, so like I mentioned, we have a tenet called self-protecting storage. And the idea of self-protecting storage is that more and more customers are moving to a place where they're not just using the traditional backup tools. In other words, I'm not using traditional tape backup every night, or I'm not using a media server to do all my backups. I'm using more and more of my primary storage tools to do data protection and data recovery, like snapshots and replication, right? Because that allows me to have a backup within seconds. That allows me to do recoveries within seconds to minutes. And that's kind of the trend that we've seen happen very quickly. And the nice thing about AppAssure is that is the DNA of AppAssure. AppAssure is really based on a snapshot-and-replication-based solution, and it allows us to go take advantage of deeper tools around scheduling, deeper tools around single mailbox or single email recovery, and databases. All those kinds of tools can now be applied to the snapshot replication technology that we have at Dell. So there's two things we're doing with AppAssure. Number one, we're going to make it the best-of-breed software backup and recovery solution in the market. That's goal number one, but also we're going to continue to complement our portfolio. In other words, provide unique solutions, additional rich capabilities within our Compellent, EqualLogic, and PowerVault lines, where you're going to see the AppAssure capabilities in those products. So I wonder if we could talk a little bit more about backup specifically, data protection generally.
So when you think about the market, it's really evolved from one that was dominated by tape, and then data deduplication improved the economics of disk-based backup, and then you sort of had VTL evolving into these purpose-built backup appliances, which are now a multi-billion-dollar market, even bigger than the tape market ever was. So that sort of tells you that there's some additional utility there that perhaps tape wasn't able to provide. If I understand it, Brett, you're putting forth a somewhat evolved, actually substantially different vision, which is utilize local snapshots and a continuous data protection strategy, and then presumably mirror stuff offsite, replicate it offsite, and deal with the backup window problem that way, and address shrinking RPO and RTO requirements that way. That is a different vision than just sort of sticking in a large target dedupe device. That's right, that's right. In fact, talk about that a little bit. Well, it's interesting, you bring up a couple of good points. Number one is the market originally evolved to replacing a big tape library and putting in a big disk library, and then using the additional performance you had from disk, or additional reliability you had from disk, as an augmentation to the current environment. Well, the problem is you're not taking advantage of all the disk-based random access and speeds and capabilities of disk. So what you've seen is- You're emulating tape. You're emulating tape. So there's really, yeah. I mean, why make a really high-performance disk drive look like a tape drive? So at some point, we all woke up and said, let's start treating disk like disk and getting more capabilities out of it. The second big trend that we've seen is customers traditionally used snapshot and replication for DR, and then used disk-based backup or tape backup for backup.
In other words, that was more of a long-term thing, even used as an archive, or if I had a complete site disaster, I could use tape to recover or disk-based backup to recover. We're seeing those worlds come together, right? So the nice thing about AppAssure is you can take snapshots up to every five minutes and move those off-site, and you're only moving changed blocks, so it's very network-efficient. But the other great thing is if I have a failure on my primary data site, I can stand up a virtual machine or even a physical machine either off-site or even in the cloud in seconds to minutes, right? So think about your data center going down, and now the workload, the application you're running, can be stood up and running in another location or in a cloud in a very short period of time. Or while you're restoring the data, you can actually stand the workload up and be running it while the restore is happening. So Brett, you mentioned cloud, and that seems to be really the wild card in storage, especially if you go down market to the SMBs, and backup specifically. Where does Dell really see cloud replacing local storage for backup, or kind of integrating into the portfolio? Yeah, I always talk about cloud. Cloud is one of the biggest overused terms in the industry right now, but at the same time it's a very important trend that's happening, and you brought up SMB. What we're seeing is more and more customers want a way to augment their current IT infrastructure, right? It's not about replacing. Nobody wants to go, okay, I'm going to kill all my IT and move everything to the cloud. But if the cloud can provide an off-site data protection target for me, if the cloud can provide a replication site where I can keep an extra copy or fail over to a cloud, those are really good value propositions, because I don't have to go rent additional data center space, or I don't have to go look for a remote site and do brick-and-mortar on building another location.
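The "only moving changed blocks" idea Brett mentions above can be sketched in a few lines. This is a minimal illustration of block-level change detection, not AppAssure's actual implementation; the block size and function names are hypothetical:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical block granularity

def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of a volume image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(prev: bytes, curr: bytes) -> dict[int, bytes]:
    """Return only the blocks that differ since the last snapshot --
    this delta is what crosses the network, not the whole volume."""
    prev_h, curr_h = block_hashes(prev), block_hashes(curr)
    delta = {}
    for idx, digest in enumerate(curr_h):
        if idx >= len(prev_h) or digest != prev_h[idx]:
            delta[idx] = curr[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
    return delta
```

With a five-minute snapshot interval, the delta is typically a tiny fraction of the volume, which is why the replication stays network-efficient.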
Those are value propositions I think are very important to customers, and where cloud can actually get some stickiness, right? Yeah, so I'm interested in this whole backup transformation and what Dell's strategy is there, because backup has been for decades a one-size-fits-all, and what I mean by that is everybody does daily incrementals and weekly fulls, and then if you got a lot of dough you could maybe do an expensive sort of very high-end, like an SRDF type of thing, but that's really it. Right. And so, but different applications have different RPO and RTO requirements. So are you suggesting that your strategy is to enable customers to align their data protection strategy with their application value? Is that- Yeah, absolutely. Can you guys actually do that today? Yeah, absolutely. I mean, the name AppAssure is not an accident. It's app-aware, right? It's the product that's designed around application awareness and being able to understand the inner workings of an application, because it's one thing to take a snapshot of an application, right? It's another thing to take a snapshot of an application, an image, that can actually be stood up and run, right? Because anybody can take a snapshot, but it's very hard to understand, you know, if you've got an active database or an active semi-structured database that's got a lot of transactions going on, you need to know when to quiesce that, how to flush cache and make sure that snapshot is crash-consistent or app-consistent, as we call it. And so the goal is really about the application administrator, right? The ability of him being able to say, okay, my Exchange application or my SQL application is down, what's my fastest point of recovery? How do I get up and running very quickly? Because, you know, a one-day recovery time objective is unacceptable, right? I need to have recovery capabilities that are seconds to minutes.
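The quiesce-and-flush sequence Brett describes is the essence of an app-consistent snapshot. Here is a toy sketch of that sequence, assuming a stand-in application object; all the class and method names are hypothetical, not AppAssure's API:

```python
class App:
    """Toy stand-in for a transactional application."""
    def __init__(self):
        self.cache = ["txn-1", "txn-2"]  # writes still pending in memory
        self.disk = []                   # writes already on disk
        self.quiesced = False

    def quiesce(self):
        self.quiesced = True             # pause new transactions

    def flush_cache(self):
        self.disk.extend(self.cache)     # push pending writes to disk
        self.cache = []

    def resume(self):
        self.quiesced = False

def app_consistent_snapshot(app: App) -> list:
    """Quiesce, flush, snapshot, resume -- so the captured image can
    actually be stood up and run, not just copied."""
    app.quiesce()
    app.flush_cache()
    snapshot = list(app.disk)            # point-in-time, app-consistent copy
    app.resume()
    return snapshot
```

Skipping the quiesce and flush steps is what yields a merely crash-consistent image, where in-flight transactions may be lost or need replay on recovery.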
And to your point, historically, you've had to buy a very, very high-end application, have very, very big pipes to make sure that that replication data can flow over that pipe, and have a second site that can recover that. But the new kind of generation is about, you know, doing very efficient RPOs, where you can track the application, be able to quiesce the application and take snapshots every five minutes. It's about being able to de-dupe and compress that data before it's ever sent over the wire. It's about only sending the changed data over the wire, rather than sending every single read and write operation over the wire. So all those things allow us to get LAN optimization, WAN optimization, and application efficiencies that allow a true application administrator to be up and running very quickly. So what about the notion of these purpose-built backup appliances? Do you not directly participate in that market, or do you do so? Oh, no, no, we have a couple of different products in that market, yeah. Okay, so how does that fit? When should a customer drop in a purpose-built appliance versus sort of take this, you know, CDP approach? Well, those two don't have to be mutually exclusive, right? Yeah, so we have a couple of different products. We have a DR4000, which is based on our Ocarina technology, which is a target-based backup and recovery device. So you can mount it as an NFS or CIFS share, and you can point any data stream at it and it will de-dupe it inline, store it on the back end, and you can recover data, you know, using the high-efficiency random I/O capabilities of a disk drive, but still get much more cost optimization because, number one, you're using SATA drives, and number two, you're compressing and de-duping all that redundant data that comes into that device.
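The "de-dupe and compress before it's ever sent over the wire" step above can be sketched roughly like this. It's a minimal illustration of hash-based dedupe plus compression, assuming a shared hash index between source and target; it is not Ocarina's or AppAssure's actual algorithm:

```python
import hashlib
import zlib

def dedupe_and_compress(blocks, seen_hashes):
    """Before replication: drop blocks the target already has, then
    compress what's left -- only unique, compressed data crosses the WAN."""
    payload = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest in seen_hashes:
            payload.append(("ref", digest))  # target already stores this block
        else:
            seen_hashes.add(digest)
            payload.append(("data", digest, zlib.compress(block)))
    return payload
```

Backup streams are full of redundancy (repeated files, repeated fulls), so the combination of reference-instead-of-resend plus compression is where most of the network savings come from.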
It's very good for somebody running your traditional backup applications, or where you have a heterogeneous environment, like, say I may be running Backup Exec, CommVault, I may be running Tivoli and several other backup applications in my environment. I can use this as a common repository that will de-dupe across all of those different environments and hold that data, and allow you even to replicate that data one to one, or many to one, or one to many. The other kind of purpose-built device we have is a DL2200 product, which we continue to evolve, which is really more of an integrated appliance, right? So it's got all the backup software and the back-end storage to go with it. So we kind of talk about it in terms of being very easy to use. It's a product that, if you want to set up and have your environment protected, you can set this box up and within 30 minutes you can have your data environment being protected and backed up using things like de-dupe, using tools like snapshot replication, but it's all in one. It's an appliance that allows you to kind of have your backup application and your backup server in one device. And the backup app in that instance is... Well, today we have Symantec, CommVault, and of course we're very quickly adding AppAssure to that solution right there. Yes, okay, so AppAssure is Dell-owned IP on the software side, which is very important, because you're seeing companies, I mean EMC is classic at this, they take hardware and they bring software along with it. Other companies like NetApp have taken a different strategy, and personally I've said that I think that if you're going to go for it in storage, you better go for it, and a big part of that is owning software assets. No, it's, I mean, getting things integrated and easy to use is key, right? I mean, we have a pretty wide portfolio, and I can sell point products all day long. If you want backup recovery, I'll sell you AppAssure.
If you want primary storage, I'll sell you Compellent, EqualLogic, or PowerVault. If you want, you know, a backup-to-disk target, I'll sell you DR, but we're really working on making these products integrated, right? So that, you know, no matter where you buy within our portfolio, you're going to get, you know, exposure to, or the ability to leverage, all of the IP that we have in our portfolio, right? So what does that mean from an integrated standpoint? Obviously you're talking about the management console, that's something that I get to see, but it sounds like it's more than that. It sounds like it relates to how you move data, how you compress and uncompress, or optimize and de-optimize if you will. Exactly, well, I mean, use dedupe and compression as an example, right? So we have a common dedupe and compression technology that we use across our portfolio, based on our Ocarina acquisition. So, you know, the DR4000 uses that technology, we're building that technology into our file system, we're building it into, you know, any future product that runs dedupe and compression, so we'll use that common dedupe and compression technology across the portfolio. So that means when I'm running these devices together inside of my data center, they understand that compression and dedupe technology, they can talk to each other without rehydrating. You know, there's a lot of efficiency to be gained using a common technology across the portfolio around dedupe and compression. That's just one example. Why is it, this is a fascinating discussion to me, because I made an observation at the top of the show that a lot of times, companies make acquisitions and they just sort of leave them alone, you know, the integration is left for the customer to do. Now, that's somewhat changing, but it's getting a lot harder because the portfolios of these large storage companies are so diverse. Dell, on the other hand, has made integration a priority from early on.
Yes. But at the same time, because Dell focuses on the small to mid-sized customer, you would think it should, in theory, be less of a priority. Why is it that Dell is so much more focused on integration than, say, some of the larger whales that you're used to? Well, I would argue for the mid-sized customer, it's even more important, right? The mid-sized customer is the guy who's, you know, I don't want to have to have the- Because they don't have the resources to do that. Yeah, I don't want, you know, to have, you know, the army of IT guys that know how to script and put things together. That's just not true in a mid-sized customer, right? So what we really believe is the mid-sized customer wants enterprise features and capabilities. More and more you're seeing mid-market products have the functions that enterprise products had just five years ago. And so the mid-market customer is looking for products that are easy to use, but have all the capabilities. Well, frankly, you know, in my opinion, if we don't integrate, if we don't bring an ease-of-use capability to all this technology, then we are asking the customer to take on the burden of doing all the integration. And that's not acceptable. So I want to come back to this notion. I call it the time machine for the enterprise. I know Dell hates it when I use analogies that involve Apple. But you know what I mean by time machine for the enterprise, dial it up, dial it back. What's the drawback of that approach? Is there a performance overhead? Is there a, you know, CPU bandwidth requirement? And what are the trade-offs? So let me make sure I understand your question. You're talking about the ability to go back to your data and look at it, you know, what you did five minutes ago, what you did 15 minutes ago? Yeah, these, you know, five-minute, 15-minute, half-hour snapshots that I can dial back, you know, at my convenience. Well, like I said, the AppAssure DNA is built on that capability.
Now, you know, just like any other application, in order for AppAssure to capture a baseline of all the data, it's got to take the first snapshot and copy that snapshot over. And so that's what we call creating a baseline, right? And so let's see it. Yeah, there's some work there, so we'd always recommend running that at night, or, you know, setting some time aside for that baseline to be set. So that could take days, weeks? Oh, no, no, no. Hours. Hours. Okay, so it depends how big your data set is. But once you've done that, now all you're doing is moving incremental bits, changed bits, and that's a very efficient process, right? So it depends on what your change rate is, but change rates are usually low, especially if you're doing, you know, snapshots every 15 minutes. I mean, how much data changes in 15 minutes? So really, you're kind of trickling over a few minor bits, reconciling it on the target side, creating a new baseline, right? Basically a synthetic baseline, but now you can go back and point to it. And we have the ability to do incrementals forever, right? Which means once you create that baseline, you never create a baseline again. So it's a perpetual incremental once you do that baseline. And to do that baseline, that initial seeding, do you have some throttling software that allows me to, you know, not affect my performance, or is it just? Well, like I said, the performance is so minimal once you've done the baseline that we really don't see a problem with customers reaching in and doing a snapshot every five minutes. Yeah, I mean, if customers don't like the idea of the application needing to be crash-consistent every five minutes, they may need to back off of that a little bit, but it all depends on what the customer needs are. But they can dial that up or down. They can dial it up or down. They can say, I can do it every five minutes. I can do it once an hour. I can do it once a day.
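The incrementals-forever flow Brett walks through (one baseline, then deltas merged on the target into synthetic baselines) can be sketched like this. It's a simplified model with volumes represented as block-number-to-data maps; it is not AppAssure's actual mechanism:

```python
def apply_incremental(baseline: dict, delta: dict) -> dict:
    """Merge an incremental (just the changed blocks) into the previous
    recovery point, synthesizing a new full baseline on the target --
    no second full transfer is ever needed."""
    merged = dict(baseline)  # copy, so earlier recovery points survive
    merged.update(delta)
    return merged

# One initial baseline (the seeding), then incrementals forever.
baseline = {0: b"block0", 1: b"block1"}
recovery_points = [baseline]
for delta in ({1: b"block1-v2"}, {0: b"block0-v2"}):
    recovery_points.append(apply_incremental(recovery_points[-1], delta))
```

Each entry in `recovery_points` is a complete, directly restorable image, which is what lets you "dial back" to any snapshot interval without replaying a chain of tapes.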
But either way, it's a very, very light load on the resources for the application. It's a great vision. I mean, personally, I think it's the future of backup, because we've been chasing the backup window for decades. That is yesterday's battle. We live and die by the backup window in this industry, but there's no reason to. I tell people all the time, people keep talking about the backup world changing, and my argument now is it's done. It's changed, right? So we had a survey with ESG, we did it about two months ago, where they went out and surveyed mid-market customers. 52% of mid-market customers are using replication and snapshot as their primary means of backup and recovery on their tier-one data. So it's done. We're there. Really, I didn't realize it was that high. That's awesome. Well, it's smart, 52%. So, excellent. All right, Brett, well listen, thanks very much for coming on theCUBE today. Thank you guys for having me. For sharing your perspectives. We're live at the Dell Storage Forum in Boston. This is theCUBE, SiliconANGLE.tv's continuous coverage of Dell Storage Forum. Keep it right there, we'll be right back.