From the SiliconANGLE Media Office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante.

Hi everybody, welcome to this special CUBE Conversation. It's part of our partner series sponsored by HPE, Hewlett Packard Enterprise, and we've been drilling into the role that partners play, the value they add as they emerge as a sort of new breed of nimble system integrator. Tim Ferriss is here, and he's with GreenPages. We're going to do a disaster recovery drill-down. It's a topic that is extremely important, and it's relevant to this day and age. Tim, thanks for coming on.

My pleasure, thank you for having me.

So DR, it's traditionally been expensive, complicated, hairy, scary, but necessary. What should we know about the state of DR today?

Well, like you said, I think a lot of people have written it off as prohibitively expensive, certainly in the small and medium business. But with the advent of cloud, with the explosion of cloud services, DR as a service has made cloud-based disaster recovery affordable for even the small business. It's taken a lot of the complexity, some of the complexity, out of it. And for some clients, it's the first step toward a cloud journey.

My friend Fred Moore, former StorageTek senior vice president of strategic planning, is famous for coining the phrase "backup is one thing, recovery is everything," and it applies to DR: failover is one thing, failback is everything. And a lot of times it's just too dangerous to test failing back. How is DR evolving, particularly for small and mid-sized businesses, such that they can have confidence that not only can they check a box for the corporate board, but if there's a disaster, they can actually recover?

Yeah, DR is not just a replication exercise, right? Not just getting data from point A to point B, but automating that, automating the testing, and creating runbooks around that data.
I think some things have certainly made that easier over the years. I was an early delivery consultant for VMware's Site Recovery Manager. Thankfully, I've used it much more for data center migrations than for actual disasters. It was a fantastic automation tool, but it used other technologies to get the data from point A to point B and replicate it.

Some things have made that easier and more affordable for people over the years. Bandwidth is cheaper. You still have to get that data from point A to point B, and the pipe used to be prohibitively expensive: could it keep up with my rate of change? But bandwidth is becoming less and less expensive and less and less of a hindrance.

Then there's the software and the technologies. Back in the old days, it was array-based replication, and you needed to have like arrays in production and in DR. So if I have an all-flash array in production, I need that same array in DR. Well, maybe I don't want to spend money on an all-flash array for a use case that I hope I never need, right? I'll test it, but hopefully never need it. And our partner HPE has done some great things there, letting you replicate from a Nimble all-flash array to a hybrid array in DR, letting people save some money.

But for our small and medium business clients, for those who want to get out of the data center business, maybe they want to start with DR. DR as a service has been a big mover for us, with a lot of traction over the past year or two.

So, I mean, one of the concerns, and you hear this from security practitioners all the time, is that they're drowning in point products, and DR was sort of the same. I, as the customer, had to become the system integrator, or had to engage and spend a lot of money figuring out that system. So DR as a service kind of takes care of all that, doesn't it?

It does.
It offloads not only the operational maintenance of the DR infrastructure, but you can also leverage the provider's years of expertise in DR. Again, hopefully folks don't have a ton of experience failing over from disasters; hopefully it never happens, or it happens once. But these folks are seasoned veterans in DR. So you get not only the service, taking care of the operations, but also their expertise for design.

So I've got to ask you, you mentioned bandwidth. We always joke, we old mainframers, that the fastest way to get data from point A to point B is the Chevy truck access method, CTAM. And that was tape in those days. And large companies still use tape; the big hyperscaler guys use tape. I presume it's pretty much dead in small business, and maybe it's even a dirty word.

I get dirty looks when I mention tape.

But do people still use tape for DR? A last-resort type of thing?

People do. I would say, increasingly, if people are using tape, it's for those less critical workloads. Hopefully anybody performing a business continuity initiative will tier their workloads. They have their tier zero, those things that need to be up and running hot in the data center, and their tier ones with RTOs, recovery time objectives, in the minutes. Tape, you only want to use that for recovery time objectives maybe in the weeks. I hope you never have to do it.

Okay, so, I mean, I've always hated tape, but it's still not dead yet.

No, people are trying to kill it.

Okay, so thinking as an architect, let's say I'm a small, let's say mid-sized business, because that's where some of the challenges are. I used to have, you know, sort of backup over here and recovery over there, and I didn't even think about DR; it wasn't integrated. What should I be doing in terms of bringing those disciplines together?
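As an illustration of the tiering exercise Tim describes, here is a minimal sketch in Python. The tier names, RTO thresholds, and workload list are hypothetical examples chosen by the editor, not anything from the interview; the point is just that classification by recovery time objective is a simple, mechanical first pass once the business has supplied the RTOs.

```python
# Illustrative sketch: classifying workloads into DR tiers by RTO.
# Tier boundaries here are hypothetical, for demonstration only.

def dr_tier(rto_minutes: float) -> str:
    """Classify a workload by how quickly it must be recovered."""
    if rto_minutes <= 0:
        return "tier 0"   # hot in the data center: must never go down
    if rto_minutes <= 60:
        return "tier 1"   # RTO in minutes: replicated, automated failover
    if rto_minutes <= 60 * 24 * 2:
        return "tier 2"   # RTO in hours to days: restore from backup
    return "tier 3"       # RTO in weeks: tape or archive is acceptable

# Hypothetical workloads with business-supplied RTOs (in minutes)
workloads = {"order-entry": 0, "email": 30, "reporting": 1440, "archives": 20160}
tiers = {name: dr_tier(rto) for name, rto in workloads.items()}
print(tiers)
```

Under this sketch, order entry lands in tier zero, email in tier one, and the archives (two-week RTO) in the tape-friendly tier.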
How should I be thinking about architecting a disaster recovery solution for my clients? Where do I start?

Well, you should start by assessing the applications. Don't start at the VM level or the physical workload level. Hear from the business: what are the services they need to provide in the event of a disaster? A business continuity plan needs to be in place before you take on a disaster recovery architecture initiative, so having that input is key to the disaster recovery process. So assess, assess what services need to be up and when, tier them, and then investigate. We investigate several different methods of protection with our clients, and a DR architecture won't consist of just DR as a service or a physical prem-to-prem replication environment. It could contain many different types of protection: DRaaS for our virtual workloads, application-based hot protection for SQL or other database workloads using native application replication, and that sort of thing. So there are a lot of different things you can do, and it's not one-size-fits-all; it's really a mosaic.

Tailoring the solution based on the application's value, that gets into some funny discussions with people. Everyone says speak in business terms, so you sit down with business people and say, what do you want your RPO and RTO to be? And they go, what? Okay, RPO: how much data are you willing to lose? And they go, none. How much money do you have? And after you have a problem, how fast do you want to get it back? Well, you talk about instantaneously. How much money do you have? So this notion of recovery point objective and recovery time objective is sometimes not business speak. How do you translate geek into wallet and wallet into geek?

Well, yeah, have you asked the question, have you assigned a value to downtime? How much is it going to cost you to be down?
And I don't like to go into customers and hit them with a lot of FUD, fear, uncertainty, and doubt. But a good business should value how much downtime or loss of data will cost it, and then use that to determine what it needs to spend on DR in order to make sure that doesn't happen.

It's interesting. Having had those conversations with many CIOs in the past, it used to be email was mission critical, and it still is in many ways, but of course the vast majority of people have outsourced their email to Microsoft or Google or whomever. So now the answer to the question of what it costs you when it's down is, well, it depends what system is down. If it's my transaction system and I'm an online retailer, and it's Black Friday, I'm losing a lot of money. So do people have a sense of the cost of downtime, or the value of their data and their applications?

I think a lot of times they do not, and it takes some encouragement to help them realize it. For some, it's just so obvious. Our retail customers are hyper-focused on that value. It's unfortunate, but during hurricane season we have a lot of conversations with folks about DR because it's top of mind for everybody. For our retail customers, their hurricane season is Black Friday and beyond. They want to make sure they have a solid solution leading into Black Friday, because a minute of downtime can mean thousands and thousands of dollars of lost business and revenue. So I think more and more it's becoming commonplace for people to put a value on it, but you still run into folks who haven't.

Okay, and I get it, it's an insurance sell, it's somewhat of a fear and uncertainty sell. It's not a fear of missing out, it's a fear of losing all your data. So, okay, let's assume you guys can help me get through the business case.
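The "assign a value to downtime" exercise reduces to simple arithmetic once the business supplies the inputs. The sketch below, with hypothetical numbers invented for illustration, shows the calculation: lost revenue while the system is down, plus revenue tied to data written since the last replication point (the RPO) that would be lost in a failover.

```python
# Illustrative arithmetic for valuing downtime; all figures are hypothetical.

def expected_annual_loss(revenue_per_minute: float,
                         expected_outage_minutes_per_year: float,
                         data_loss_minutes: float = 0.0) -> float:
    """Expected yearly cost of outages: revenue lost during downtime (RTO)
    plus revenue tied to data lost since the last recovery point (RPO)."""
    downtime_cost = revenue_per_minute * expected_outage_minutes_per_year
    data_loss_cost = revenue_per_minute * data_loss_minutes
    return downtime_cost + data_loss_cost

# A retailer doing $5,000/minute on Black Friday, expecting one 60-minute
# outage a year, protected with a 15-minute RPO:
loss = expected_annual_loss(5000, 60, 15)
print(f"${loss:,.0f}")  # $375,000
```

That figure becomes the ceiling on what a DR solution is worth per year, which is exactly the "geek into wallet" translation discussed above.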
Let's assume I get there. How are people moving forward? How fast are they moving? And how critical is it to their digital transformations?

We're having the conversation with more and more folks; more and more are finding value in disaster recovery, and we help them through that assessment and provide the value. I think another big win for the IT establishment is not just providing the best service they can, but getting buy-in from the business: let's agree on what a reasonable recovery time objective is. And let's understand that I can give you a zero recovery point objective, or a near-zero one with synchronous replication, but it's going to cost X amount of money, so that the business takes some ownership of the quality of the disaster recovery solution and the tightness of the RPO and RTO. You empower the business to make those decisions by giving them options. And I think we help the customers we work with do that.

So it's important, and maybe it's worthwhile getting a little didactic here. RPO zero means essentially you're not losing any data in a disaster. There's probably no such thing technically as RPO zero, but the closest is synchronous replication and that sort of thing, so near zero.

Right, so you do synchronous replication within some physical metro area. Of course, the problem is if you get hit with a major disaster, then both sites go out, so you have to do async.

Yeah, and frankly, it's just understanding what type of disaster you're asking me to engineer for. Is it a localized fire in the data center? Is it an earthquake? A hurricane? A regional disaster affecting the whole coast? Katrina, right.
Yeah, so understanding that. Or, you know, to this day, IT organizations are getting calls from upper management: we have a power failure in the building, okay, let's fail over to our disaster recovery site, and the power's going to be back on in an hour. Knowing when to make that decision is critical as well, and not making it too trivially.

So if you're in a zone where you have a high probability of some kind of disaster that's going to wipe out both synchronous platforms, you go asynchronous, but then the problem becomes the speed of light. There can be a little bit of delay, or a lot.

It could affect the performance of the application too, while you're waiting on that synchronous write. So that could be your revenue hit. But it could be, you know, can you handle five minutes of lost data? Yeah, sure, I can probably recreate that. How about 15 minutes? Yeah, maybe. How about an hour? How about half a day? How about a day? Now you start to get into the business discussion of what the value really is, and now you can architect around those things.

If you throw money at it, you can solve almost any problem as an architect.

You absolutely can. But then there's that balance of the business case, right?

Exactly. So what we'll often do is the assessment, and then a workshop on the various ways we can solve the problem. We can show the client and the business: okay, we can do what you asked for, it will cost X, and that's very expensive; or we can do it this way, a little bit differently, or combine a couple of approaches that may increase your RPO a little bit but are much more affordable. And they can make a decision based on that.
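The synchronous-versus-asynchronous trade-off above can be sketched as a small decision rule. This is an editor's illustration, not vendor guidance: the thresholds are hypothetical, and the latency estimate uses the rough rule of thumb that light in fiber adds on the order of a millisecond of round-trip time per 100 km of site separation.

```python
# Illustrative sketch of the sync/async decision; thresholds are hypothetical.

def replication_mode(rpo_seconds: float, site_distance_km: float) -> str:
    # Rough rule of thumb: ~1 ms of round-trip latency per 100 km of fiber.
    round_trip_ms = site_distance_km / 100
    if rpo_seconds == 0:
        # Synchronous replication: every write waits on the remote site,
        # so it is only practical within metro-area distances.
        if round_trip_ms <= 5:
            return "synchronous"
        return "not feasible at this distance"
    # Any nonzero RPO tolerance lets writes complete locally and ship later.
    return "asynchronous"

print(replication_mode(0, 80))      # metro-area pair: synchronous works
print(replication_mode(0, 1500))    # cross-country: sync would stall writes
print(replication_mode(300, 1500))  # 5-minute RPO: asynchronous
```

It also captures why a regional-disaster requirement pushes the DR site far away and therefore pushes the design to async: at distance, the synchronous write penalty becomes the "revenue hit" mentioned above.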
Something you said before, Tim, resonated with me, which was it's not one-size-fits-all. That says to me I need technology that gives me the granularity to map protection to the application, based on the cost of downtime or the value of the application. And it sounds like, I'm inferring, that type of modern technology exists today.

Absolutely. There are a number of different ways that applications can be protected. Active Directory needs to be protected using its native replication. Oracle and SQL have their own methods of protection; so does Exchange. But for virtual workloads, certainly, you can dial the protection up or down using DRaaS with a product behind it, like Zerto, a host-based replication and automation capability. Host-based replication makes things a bit easier for clients, because they can very granularly choose individual VMs without having to house them on a specific replicated volume and do all that mapping on the back end. It takes a lot of the complexity out. And you can assign different priorities to those machines. So I could be replicating 100 machines, but 10 of them are more important, and I want to make sure those 10 get all the bandwidth they need to keep the lowest possible RPO. There are certainly technologies out there, and we are partners with some providers, that let you dial that in.

What role does HPE play in this whole equation?

Right, so for prem-to-prem disaster recovery technologies, HPE is fantastic. As I mentioned earlier, it used to be that we had some very high-end workloads residing in primary data centers, living on all-flash arrays. A Nimble or a 3PAR all-flash array, those are expensive technologies, necessary to run the business in normal circumstances.
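The per-VM prioritization Tim describes, where 10 critical machines out of 100 get the bandwidth needed for the lowest RPO, can be illustrated with a simple weighted split. This is an editor's sketch, not Zerto's actual algorithm or API; the VM names, weights, and link capacity are all hypothetical.

```python
# Illustrative sketch: proportionally weighting replication bandwidth
# toward high-priority VMs. Not any vendor's real scheduling algorithm.

def allocate_bandwidth(vm_weights: dict[str, int],
                       total_mbps: float) -> dict[str, float]:
    """Split available replication bandwidth among VMs by priority weight."""
    total_weight = sum(vm_weights.values())
    return {name: total_mbps * w / total_weight
            for name, w in vm_weights.items()}

# 10 critical VMs (weight 5 each) and 90 routine VMs (weight 1 each)
vms = {f"critical-{i}": 5 for i in range(10)}
vms |= {f"routine-{i}": 1 for i in range(90)}

shares = allocate_bandwidth(vms, 1000.0)  # a hypothetical 1 Gbps link
print(round(shares["critical-0"], 1), round(shares["routine-0"], 1))
```

Here each critical VM gets five times the share of a routine one (about 35.7 Mbps versus 7.1 Mbps), which is the sense in which the important machines "keep the lowest possible RPO" while everything still replicates.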
But for DR, for a solution that you hope you never need, you can replicate from an all-flash Nimble to a hybrid solution, a hybrid Nimble, in DR, thereby saving yourself some money. That's a hybrid flash array, an adaptive flash array, in DR that is fronted by SSD and RAM but priced more like an HDD, spinning-disk array. So HPE is allowing us to do some things that help save money there as well.

All right, Tim, thanks very much. It was a great conversation, and I really appreciate your perspectives.

All right, thank you, Dave, my pleasure.

You're welcome. Okay, thank you for watching, everybody. This is Dave Vellante with theCUBE. We'll see you next time.