Live from Las Vegas, it's theCUBE. Covering Dell Technologies World 2019. Brought to you by Dell Technologies and its ecosystem partners. Good afternoon, welcome back to theCUBE, day three of our live coverage of Dell Technologies World 2019. I'm Lisa Martin with my co-host Dave Vellante. Hey Dave. Hey Lisa, how's it going? Good, day three. It's cold in here. It is cold in here. I agree, but we're going to lighten it up with some really good conversation. We've got Rob Emsley back on theCUBE, Director of Product Marketing for Data Protection, Dell EMC. Rob, great to have you back. Oh, great to be back. And you've brought a guest with you: Adam Schmidt, Network Architect from customer GEI Consultants. Welcome Adam. Thank you for having me. Yeah, time to heat it up. Yeah, exactly. Exactly. What a great topic to heat it up with, data protection. It's a hot topic, you're right. All right, so before we turn the dial way up on this heat, Adam, give us an overview of GEI Consultants, who you guys are, what you do. Sure, GEI Consultants is an environmental, water resources, and structural engineering firm. We focus on anything and everything under the sun: structural, geotechnical, biochemical, pretty much every kind of engineering. So, important stuff. Before you were working with Dell EMC, talk to us about your infrastructure, on-prem, hybrid. What were you doing in terms of ensuring that that data was protected and accessible, so insights could be extracted from it? Absolutely. GEI is 43 offices, east coast to west coast, and each of those offices has its own infrastructure that we have to protect at each site, ranging anywhere from three to 15 terabytes in size. So, we're talking a lot of data and a lot of different geographical locations that I, as a network architect, had to worry about protecting.
And one of the challenges of our older infrastructure: we were running Gen4 Avamar servers just doing file-level backups and restores, and we didn't have the ability to do any offsite backups at any location. Now, we did have those in our primary data centers, and we were able to cross-backup from each location to another when necessary, but it was, again, only a file-level backup. It wasn't an actual full image, and we didn't have a full cloud picture yet that we could expand on going forward. So, not a really robust disaster recovery strategy in the event that you actually had to invoke it. It happened several times. There are examples I could give you where an office lost hardware in their infrastructure, and we had to do a restore by restoring the files at an offsite location, putting them on a USB hard drive and shipping it to that location, and then having to rebuild the infrastructure from the ground up and copy the data over. Not a timely way to restore. Or inexpensive. So, Rob, in the old days, you'd have an admin at the remote office, they'd load in a tape... Exactly. And then they'd recycle the tape every day, you'd have it for a week, and then you'd reuse the same tape over and over again. That was state of the art back then. Yeah, you probably remember some of those ads with a picture of a slightly undesirable individual that said, would you like this person to be your backup admin? Which I thought was a little bit strange, but no, I think things have moved on a little bit. What does the architecture look like today? Well, architecture is a very key word, because we have a belief in a saying that architecture matters, and when you have a distributed network where you have lots of edge locations and you have the requirement to protect them and bring that data back to the core, the architecture that you deploy really does make a difference.
There's a famous Star Trek line, and I've heard it a few times this week: you cannot change the laws of physics. And the amount of data that you move from the edge to the core, you want to make it as small as possible. Because if you don't, the amount of time that it takes to get data protected from the edge, especially when you have lots of edges, becomes a real constraint. So that was something which GEI was able to take advantage of. So can you do that at warp speed? Doesn't that change the laws of physics anyway? We won't go there. Okay, so I wonder if you could share with us how you came to this spot. What was life like before? Did you look at any other vendors? You know, paint the picture for us. So working with the Dell EMC technical team as well as the DPS sales team, we were able to come up with a different strategy going forward. But it wasn't without a lot of trial and error, doing proofs of concept with other companies that made promises that they could do the backups that we needed offsite at different geographic locations. When it came down to it, we were going to have to fork over a lot of money for infrastructure installed at every single location. Whereas with Dell EMC, I didn't have to deploy any hardware. All I had to deploy was a virtual appliance at each location. And we were successful in backing up remotely. We tried various technologies that claimed they could do it, and they didn't work successfully. So after a lot of trial and error, roughly a year's worth of trying in total, we finally got Dell EMC's technical team and the DPS team on board. And we sat down in front of a whiteboard in Boston, Massachusetts, and said, this is the picture we're trying to paint. Help me paint this as a full-blown architecture and make it happen in this design fashion. And luckily the Dell EMC team is so experienced and has so many different strategies that they can draw on.
They were able to take every little thing that we needed, mark every checkbox, and deliver a package with DPS for our solution, in our own architecture, that answered all of my questions instantly. He said virtual appliance. I mean, it's got to run on something. So what is that actually? It's like serverless, right? So we have physical infrastructure at every location. I deployed a virtual CentOS box that's an Avamar proxy that talks back to my Data Domain and communicates the CBT data changes back for backup. So it's not doing a full backup every time. That leaves a lot of headroom on your actual production server so that it's not pegged while staff are using it. So I can kick off backups during the day. It takes a snapshot and then the data gets backed up without anybody knowing. So this is really important. As you said, Rob, you can't change the laws of physics. So imagine you've got a straw and you've got to put all this data through it. It's like when you back up your iPhone for the first time, it takes forever. So you're talking about just taking the changed data and putting it through that straw, even though it's maybe a little bigger than a straw. So each day it's just a smaller amount of data. Okay, but what happens on a restore? On a restore, same process. So we'll restore that file, if we're doing a file-level restore, to the Data Domain and then copy it wherever we need to on the network, or if we're doing a full image-based backup, we can restore that either through cloud disaster recovery into AWS or Azure, or we can restore it to the actual Data Domain and vMotion it wherever we need to after that point. So let's talk about business impact. It sounds like there was a lot of trial and error, as you explained, really needing to work with a strategic partner who said, all right, I get what you're trying to do. Obviously not easy, but you've been able to implement that.
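The changed-block idea Adam describes, shipping only the blocks that differ since the last backup instead of the whole image, can be sketched in a few lines of Python. This is a toy illustration under assumed names and a made-up block size, not Avamar's actual CBT implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size for the illustration

def block_hashes(data: bytes) -> list:
    """Hash each fixed-size block of a disk image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes: list, new_data: bytes) -> list:
    """Return (index, block) pairs that differ from the previous backup."""
    changed = []
    for i, h in enumerate(block_hashes(new_data)):
        if i >= len(old_hashes) or old_hashes[i] != h:
            changed.append((i, new_data[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]))
    return changed
```

The first backup ships every block; every run after that ships only the deltas, which is why the daily window through the "straw" shrinks so dramatically.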
So how is GEI's business benefiting from this data protection strategy that you've implemented? Well, first from a financial perspective: we've eliminated the need for a completely separate offsite data center. We have everything running in a cloud environment for CDR so that we can restore instantly anytime we need to. So we no longer needed to spend on another network architect, another infrastructure, and all the different things that go along with running infrastructure at a separate location. On top of the financial savings for the company, I mean, we saved a huge amount of money on infrastructure that would only be for disaster recovery, sitting there doing nothing, whereas now we can just spend money on object storage in AWS and use that as our cloud disaster recovery strategy. When you need it, you pay for your EC2 instances, but otherwise you're just paying for object storage. It's a lot cheaper than ever having to run a full separate data center. And specifically, what is Dell's role in that equation, in terms of the value chain? With the Data Domain, we also got CDR, which allows us to use an appliance on-premise that talks to an instance in AWS or Azure. After the normal backup period, the backup completes and then ships all the data that changed up to AWS into an S3 bucket, and your data is stored there as VMDK chunk data. When you need a restore, that can be turned into an AMI in AWS and brought online whenever you need it. Okay. So this is very key. You know, on Tuesday cloud was a big topic. Hybrid cloud is the reality for the majority of customers, and Adam and GEI's leverage of AWS is a great example of what many of our clients are looking to do with their investment in the public cloud.
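Adam's cost argument, always-on object storage plus compute billed only during an actual recovery, can be roughed out as a back-of-the-envelope model. The default prices below are illustrative placeholders, not AWS quotes:

```python
def monthly_dr_cost(tb_stored: float, s3_per_gb_month: float = 0.023,
                    ec2_hourly: float = 0.10, dr_hours: float = 0.0) -> float:
    """Rough monthly cloud-DR cost: object storage runs continuously,
    compute is billed only for the hours a DR test or recovery is active."""
    storage = tb_stored * 1024 * s3_per_gb_month  # GB-month storage cost
    compute = ec2_hourly * dr_hours               # on-demand recovery cost
    return storage + compute
```

In a quiet month (`dr_hours=0`) the bill is storage only, which is exactly the contrast Adam draws with a second data center that burns power, space, and an architect's salary whether or not it is ever used.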
Certainly, you know, GEI today is using AWS as an alternative to having to purchase a secondary disaster recovery site, or having to sign up with a managed service provider that's providing something like a co-location service for disaster recovery. So using the public cloud, and using the software capabilities around cloud disaster recovery, gives them a tremendous opportunity to save a lot of money and do it very efficiently. It's like, you know, friends don't let friends build data centers just for DR. Yeah. If you're going to build a data center for something that gives you competitive advantage, okay. But it's interesting, some of the plans that Adam's got for the future. You want to share some of those, what you're thinking about for the next few years? So future plans for GEI are definitely more cloud growth and minimizing the footprint that we have on-premise, making it so that we don't have to have infrastructure at every location. Consolidation of all of our data, obviously. Going forward, GEI is going to keep growing its data, with 4K drone videos that we're modeling for different dam inspections and levee inspections. We're collecting a lot of data, but the problem is having that data geographically everywhere makes it challenging for future admins, including myself, to continue to restore and back up and keep everybody happy. It's a really challenging task to continue supporting. So going forward, we're consolidating all that data into a central location, i.e. multi-cloud environments, or the Dell Technologies Cloud that was announced this week. We have the option of leveraging multi-cloud instances and being able to keep all of our instances alive in the cloud rather than on-premise. And so you said put it in one location. You're talking physically, or is it some kind of logical mapping that you're doing? It'll be a logical mapping with some type of caching technology at the edge site, so that it's ready and available for them.
So that mapping magic allows you to recover really fast if you need to. What about, as part of those futures in the roadmap, analytics on that corpus of backup data? So, analytics in terms of how many backups are going on a nightly basis? No, specifically, are you using that corpus for any other reason? Well, let's see. You might be looking at anomalous behavior, doing stuff with air gaps and investigating that. Other DevOps activities, I mean. It's interesting that you say that, because we were talking about Data Domain having an air gap last night at an event. And the air gap method, making sure that your Data Domain is protected, puts it in a write-only mode so that nobody can get into your Data Domain and actually do any damage to your data. Because you're right, as you're backing up, there are anomalies that happen, and if those anomalies get into your infrastructure, into your data backups, you could technically get ransomware or be locked out of your own data. Data Domain does support air gap technology, allowing you to lock down the system and requiring two admins before any changes are made to it. So definitely going forward. So you mean read-only? Read-only. Yes, you said write-only. I think I heard that, but yeah. But it's a good question with respect to data reuse. The use case Adam currently has is using AWS as a disaster recovery location. But he has the ability to spin up his data within AWS. Yes, for the purpose of insurance, being able to access those production copies within AWS, but why not be able to use those for other purposes, such as interrogation of the data that's within them? Those are all things that really start to evolve the conversation from what you do for data protection to what you do for data management.
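The two-admin requirement they mention is a dual-authorization pattern. A minimal sketch of the idea, using a hypothetical `can_modify_retention` gate rather than Data Domain Retention Lock's real workflow:

```python
def can_modify_retention(approving_admins: set, required: int = 2) -> bool:
    """Allow retention-setting changes only once at least `required`
    distinct admins have signed off (dual authorization)."""
    return len(approving_admins) >= required
```

The point of the pattern is that no single compromised credential, human or ransomware-driven, can unlock the protected copies on its own.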
Yeah, so let's use some of the tool chains that live in AWS, SageMaker, for example, and apply some machine intelligence and machine learning to see what we find there, and maybe anticipate anomalies or find some things that we didn't know. Absolutely, especially when users are dumping large amounts of data. We had an instance, before we started regularly seeing large data dumps, when our data started to grow in the first place: we were inspecting levees and building models in Colorado, and we had three engineers fill up an entire server with 4K videos, and our nightly backup all of a sudden said, hey, you just got a huge amount of data change in an instant. Were you expecting this kind of change? If not, you should probably start knocking on someone's door. So we were able to use that analysis really quickly. So looking at day three of Dell Technologies World, lots of announcements. Rob, you kind of talked about some of those. You know, cloud-enabled data protection is becoming a big focus for you guys. I'm curious, Adam, to get your thoughts on some of the announcements. You mentioned VMware Cloud on Dell EMC. What are some things that really piqued your interest as, hey, we're going to have more and more data coming, we've got lots of edge devices? They talked yesterday about the edge that's coming. What did you hear that made you think, awesome, this is really going to be an integral part of our strategy going forward? Definitely. So one thing that was mentioned was PowerProtect, and that has everybody's interest right now, because having basically an Avamar system with all-flash, or a Data Domain with all-flash, gives you obvious I/O advantages in the future. That's probably going to be my next hot topic. I'm vigorously researching everything to see if, in a couple of years or sooner, that's going to fit into GEI's infrastructure and give us more benefits going forward. What's your biggest data protection challenge in 2019?
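The alert Adam describes, flagging a night whose changed-data volume sits far outside the recent norm, is a simple outlier test. A toy version with an arbitrary three-sigma threshold, not whatever heuristic the backup software actually applies:

```python
from statistics import mean, stdev

def backup_volume_is_anomalous(recent_gb: list, tonight_gb: float,
                               sigmas: float = 3.0) -> bool:
    """Flag tonight's changed-data volume when it is more than
    `sigmas` standard deviations above the recent nightly average."""
    if len(recent_gb) < 2:
        return False  # not enough history to judge
    mu = mean(recent_gb)
    sd = stdev(recent_gb)
    if sd == 0:
        return tonight_gb > mu
    return (tonight_gb - mu) / sd > sigmas
```

Three engineers filling a server with 4K video would blow past any reasonable threshold, which is exactly the knock-on-someone's-door signal.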
Our biggest challenge upfront was definitely moving from one backup strategy to a new backup strategy, from file-level backups only to image-based backups. That was one of the biggest challenges, because anytime you lift a backup infrastructure out of production and put a new one in, you're starting from zero. You can't really start from where you left off. You have to get all of that data again, and geographically, 43 offices doesn't seem like a lot, but when you're collecting data at all of those locations, it was a challenge getting everything worked out and everything backed up in the first place. All right, so you're knocking down that problem. If you were in a private meeting with Rob, and his engineering team was there, what's the one thing he could do to make your life easier? One thing he could do to make my life easier is to say, drop prices. Oh, sorry. Well, then I have nothing else to say. Well, it sounds like you do. Really, is that what you were going to say? No, so, if we could enhance the performance of DD Boost. DD Boost already delivers a lot of performance benefits for what we do, but in essence it's bound by whatever your network performance is. If there was a way of tweaking that on new servers when you implement them. For example, we acquire companies every now and then. We're implementing backup strategies for them, and we have to start new backups. If there was a better methodology of seeding, rather than having to go out physically, plug in a hard drive as NFS storage, make a clone of it, and transfer it back. If there was a different method of seeding those backups, that would make things a little bit easier. Awesome. Get on that. I mean, nobody can ever have enough performance. Yeah, that's right. And as Adam said, the big part of the PowerProtect announcement yesterday was the introduction of the industry's first all-flash purpose-built backup appliance with integrated software capabilities.
And all-flash, I think, over the coming years is going to become a definite option for secondary storage workloads. Not only for the straight performance of backup and restore speeds, but also for this huge opportunity around data reuse. And I think that you'll start to see more all-flash appearing in the data center, not just for production systems, but also for secondary workloads and where you're storing copies of production. And at the end of the day, it sounds like you're probably quite the hero to all those folks who need you making sure they have access to that data, because that's what, as Michael Dell says, is inexhaustible; it's gold. That's what drives the business forward. That's what allows you to identify new products and new revenue streams. So we'll say congratulations on being an enabler of the business so far. We appreciate you guys sharing what GEI is doing, and Rob, we appreciate your insights as well. We thank you for spending some time with us on theCUBE. Thanks guys, thank you. Thank you so much. Oh, our pleasure. For Dave Vellante, I'm Lisa Martin. You're watching theCUBE Live, Dell Technologies World 2019. Day three of theCUBE's coverage continues in just a moment.