Hi, I'm Peter Burris, and welcome to another Cube Conversation from our beautiful Palo Alto studios. Today we're here with Clint Wycoff, who's a senior global solutions engineer from Datrium. Welcome to the Cube, Clint. Well, thanks for having us, Peter. It's great to be here. So, Clint, there's a lot of things we could talk about, but specifically, some of the things we want to talk about today relate to how cloud use, as it becomes more broad-based, is also becoming more complex. As we use more cloud, and more of our work moves off-premises, how do we sustain getting more work done? And there's the crucial role that automation and human beings are still going to play as we try to achieve our overall goals with data. So why don't you tell us a little bit about some of these themes of simplicity, scalability, and reliability? Yeah, definitely, Peter. It's been a very interesting time over the last 12 months here at Datrium. We've been on a rapid release cycle; we actually released DVX 4.0 of our software just a few weeks ago, maintaining focus on those three key themes of simplicity, scalability, and reliability. That's really what the Datrium DVX platform is all about. It's about solving the challenges customers have with the traditional on-premises workloads they've virtualized. And we're also seeing an increase in customers trying to leverage the public cloud for several different use cases. So the biggest takeaway from our perspective, with the latest release of our software, is how can we take what customers have grown to love on-premises with their Datrium DVX platform and integrate that into the public cloud? Our first endeavor into that area is Cloud DVX, and that integrates directly into the customer's existing AWS subscription.
So now they have on-premises Datrium running for all their mission-critical, tier-one systems, providing all the performance, cloud backup, and all those capabilities they've grown to love, but how can I get my data off-site? That's been a huge challenge for customers. How can I get my data off-site in an efficient fashion? But in a way that doesn't look like an entirely new, completely independent set of activities associated with AWS. So talk to us a little bit about that. You said something interesting: you said it integrates directly into AWS. What does that mean? Yeah, so we've taken a direct port of our software. On-premises, customers run ESX hosts, okay? In AWS terms, that translates into EC2 instances. So the first thing we do is instantiate an EC2 instance in the customer's AWS subscription. That means my billing, my management, my console, everything now is the same. Exactly, right. And then we're utilizing an S3 bucket to hold our cloud archive. So the first use case for Cloud DVX in its current iteration is off-site archives of Datrium snapshots. I run VMs on-premises, I want to take snapshots of them, maybe send them over to a secondary location, and then I want to get those off-site for more long-term archival purposes. S3 is a great target for that, and that's exactly what we're doing. So an existing customer can go into their Datrium console, say I want to add my AWS subscription, click next, next, next, finish, and it's literally that easy. We have automated Lambda functions that automatically spin up the necessary EC2 instances, S3 buckets, all that stuff for the customer, which completely simplifies the entire process. I like to think of it almost like the iCloud backup on your iPhone: there's literally just a little slider that says turn it on. For us, it's literally that simple as well.
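The "slider button" flow Clint describes is essentially an idempotent provisioning step: turning Cloud DVX on should create the EC2 instance and S3 bucket only if they don't already exist, so repeated clicks are safe. A minimal sketch of that idea in Python, using an in-memory stand-in for the AWS account (the resource names and instance type here are invented for illustration, not Datrium's actual naming):

```python
def ensure_cloud_dvx(cloud: dict, subscription_id: str) -> dict:
    """Idempotently 'turn on' Cloud DVX: create the controller instance
    and archive bucket only if absent (toy in-memory model of an AWS
    account; all names are hypothetical)."""
    cloud.setdefault("ec2", {}).setdefault(
        f"cloud-dvx-{subscription_id}",        # hypothetical instance name
        {"type": "m5.xlarge", "state": "running"})
    cloud.setdefault("s3", {}).setdefault(
        f"dvx-archive-{subscription_id}", [])  # snapshot archive bucket
    return cloud

account = {}
ensure_cloud_dvx(account, "1234")
ensure_cloud_dvx(account, "1234")  # a second 'click' changes nothing
assert list(account["ec2"]) == ["cloud-dvx-1234"]
assert list(account["s3"]) == ["dvx-archive-1234"]
```

In the product as described, this kind of setup runs as Lambda functions against the customer's own subscription; the sketch only captures the key property that re-running the setup is a harmless no-op.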
How can we help customers get their data off-site efficiently? That's a key point for us here at Datrium, and the fact that we have a global deduplication pool means the only data that's ever going to go over the wire is truly unique. We have built-in cryptographic hashing, so as data comes in, we do a comparison on-prem versus off-prem and only send the unique data over the wire. That is truly game-changing from a customer perspective. That means I can now decrease my RPOs. I can get my data off-site faster, and then whenever I want to recover or retrieve those blocks or those virtual machine snapshots, it's efficient as well. So it's both ingress and egress. From a customer perspective, it's a win-win: I can get my data off-site fast, and I can get it back fast as well. And it ultimately decreases their AWS charges as well. That's the point I was going to make, but it's within the envelope of how they want to manage their AWS resources, right? Yep. So this is not something that's going to come along and just blow up how you spend on AWS. If you're the AWS person (we've heard what the Datrium console person can do), you're now seeing an application that has certain performance characteristics and cost characteristics associated with it, and you're seeing what you need to see. Exactly. Yeah, we abstract the AWS components out of it. If I'm in the AWS console, yes, I see my EC2 instance, yes, I see an S3 bucket, but you can't make heads or tails of what it's doing, and you don't need to worry about all that. We manage everything solely from the Datrium perspective, going back to that simplicity model the product was built upon: how can we make this as simple as possible? It's so simple that even an admin who has no experience with AWS can go in and stand this up very, very easily.
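The deduplication idea Clint outlines (fingerprint each block with a cryptographic hash, compare on-prem against off-prem, and ship only the blocks the other side doesn't already hold) can be sketched in a few lines of Python. The block size and data below are toy values chosen only to make the example readable:

```python
import hashlib

BLOCK = 4  # toy block size; real systems deduplicate KB-scale blocks

def fingerprints(data: bytes) -> dict:
    """Map SHA-256 digest -> block for fixed-size chunks of `data`."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest(): data[i:i + BLOCK]
            for i in range(0, len(data), BLOCK)}

def blocks_to_send(local: bytes, remote_digests: set) -> list:
    """Compare fingerprints on-prem vs. off-prem and return only the
    blocks the remote side does not already hold."""
    return [blk for digest, blk in fingerprints(local).items()
            if digest not in remote_digests]

# The cloud side already holds yesterday's snapshot ...
remote = set(fingerprints(b"AAAABBBBCCCC"))
# ... today's snapshot shares two of its three blocks, so only the new
# "DDDD" block ever crosses the wire.
assert blocks_to_send(b"AAAABBBBDDDD", remote) == [b"DDDD"]
```

The same comparison works in both directions, which is why the transcript notes the efficiency applies to ingress and egress alike: restores, too, only pull blocks the local side is missing.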
All right, so you've got some great things going on with being able to use cloud as a target. What about being able to orchestrate workloads across multiple different potential resource pools? How has that started? Does some of the new tooling facilitate that, or make it more difficult? Well, that's a really great question, Peter, and it's almost like you're looking into the crystal ball of the future, because the way the Datrium product and platform are architected, it's building blocks stacked on top of each other. We started on-premises. We built that out to have a scale-out architecture. Now we're going off-premises, out to the public cloud, with the first use case being able to leverage that for cloud archives. But what if I want to orchestrate that and bring workloads up inside of AWS? I have a VMware snapshot, or a Datrium snapshot, that I've sent off-prem; what if I now want to make that an EC2 instance, or orchestrate that? That's the direction we're going, so there's definitely more to come there. That's the direction, and what the platform is capable of; this is just the beginning. Now, the hyper-converged concept is very powerful, and it's likely going to be a major part of being able to put the work where it needs to be put, based on where the data needs it. But hyper-converged has had some successes, and it's had some weirdness associated with it; we won't get into all of it. The basic notion of hyper-converged is that you can bring resources together and run them as a single unit, but it still tends to be more of a resource focus. You guys are looking at this slightly differently. You're saying: let's look at this as a problem of data, and how the data is going to need resources, so that you're not managing in the context of resources that are converged; you're managing in the context of the resources that the data needs to do what it needs to do for the business. Have I got that right?
Yeah, I mean, hyper-converged has done a lot of really good things. First and foremost, it's moved flash to the host level, removing a lot of the latency problems that traditional SAN architecture has. We apply many of those same concepts in Datrium, but we also bring a lot of what the traditional SAN has as well: durability and reliability on the back side of it. So we're basically separating out my performance tier from my durability and capacity tier on the bottom. Based on what the data needs. Exactly, right. So now I've got these individual stateless compute hosts, where all of my performance is, for ultra-low latency. You know, latency is a killer of any project, most notably VDI, for instance, or even SQL Server or Oracle. One of the other capabilities we just added to the product is full support for Oracle RAC running on Datrium in a virtualized instance. So latency, as I mentioned, has been a killer, especially for mission-critical applications. For us, we're enabling customers to virtualize more and more high-performance applications and rely on the Datrium platform to have the intelligence and simplicity behind the scenes to make sure things are going to run the way they need to. Now, think about what that means to an organization. You've been at Datrium for a while now; how are companies actually facilitating the process of doing this differently? Are they doing a better job of converging the way the work is applied to these resources, or is that still difficult? How are the simplicity, automation, and reliability making it easy for customers to actually realize value from tools like this?
It's truly amazing, because once our customers get a feel for Datrium and get it into their environment (we have customers all across the world, from Fortune 500 companies down to small and medium-sized businesses: financial, legal, all across the entire spectrum of verticals), they benefit from the simplicity model. You can go out to the Datrium website, and we have a whole list of customer testimonials, and the one resounding theme across them is: I no longer have to worry about managing this storage, this infrastructure. I'm now able to go back to my CIO or my CEO and provide value to the business. I'm doing what I'm supposed to do. I don't have to worry about managing knobs and dials. Hmm, do I want to turn compression on, or maybe turn it off? What size volume do I need? What queue depth? Those are mundane tasks. Let's focus on simplicity: things are going to run the way you need them to run, they're going to be faster, and it's going to be simple to operate. Well, at Wikibon we like to talk about how the difference between a business and a digital business is data: a digital business treats its data as an asset, and that has enormous implications for how you think about how your work is institutionalized, what resources you buy, and how you think about investing. Now, it sounds as though you guys are thinking similarly. It's not the simple tasks you perform on the data that become important; it's the role the data plays in your business and how you turn that into a service for the business. Is that accurate? That is very accurate, and you brought up a really good point there, in the fact that the data is the business. That is a very key foundational component that we continue to build upon inside the product.
So one of the big capabilities relates to something you've seen a lot of in today's day and age: ransomware attacks and data breaches. It's almost every other week that you go on CNN, or pick your favorite news channel, and you hear of breaches or data being stolen. So encryption and compliance, HIPAA, Sarbanes-Oxley, all that type of stuff is very important, and we've built into the product what we call blanket encryption. Data, as it comes inbound, is encrypted. We use FIPS 140-2, in either validated or approved mode, and data is encrypted across the entire stack: in use, over the wire in flight, and at rest. That's very different from the way some of the more traditional folks out there do it, right? If I look at a SAN, it does encryption at rest. Well, that's great, but what about when the data's in flight? What if I want to send it off-premises, out to the public cloud? With Datrium, all of that is built into the product. And that's presumably because Datrium has greater visibility into the multiple levels at which the data's being utilized. Absolutely. Which is why you can apply it that way, and so literally data becomes a service that applications and people call out of some Datrium-managed store. Absolutely, yep. So think about what's next. You mentioned, for example, that when we had arrays with SANs, we had a certain architectural approach to how we did things. But as we move to a world where we can literally look at data as an asset, and we start thinking not about the tasks you perform on the data but the way you generate value out of your data, what types of challenges is the industry going to take on next, not just at Datrium? So that's an interesting question. In my opinion (and this is Clint's personal opinion), the way the industry is changing is that regular administrators are trying to orchestrate as much as they possibly can.
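The "blanket encryption" contrast Clint draws (ciphertext everywhere, not just at rest) can be illustrated with a toy symmetric cipher: encrypt once at ingest, and the same opaque bytes are what travel over the wire and land on disk. The SHA-256-based keystream below is purely illustrative and is not a FIPS 140-2 cipher; a real implementation would use a validated AES module:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream derived from SHA-256. Illustration only;
    # NOT a validated cipher and not safe for real data.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    ks = keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# Encrypt once at ingest; the ciphertext is what is stored and shipped.
key, nonce = b"secret-key", b"unique-nonce"
ct = encrypt(key, nonce, b"patient record 42")
assert ct != b"patient record 42"                        # opaque in flight and at rest
assert encrypt(key, nonce, ct) == b"patient record 42"   # symmetric recovery
```

The design point being illustrated: if data is encrypted at ingest and stays encrypted across every tier, there is no separate "in flight" gap to secure when it later moves off-premises to the cloud.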
I don't want to have to worry about the low-hanging fruit on the tree. How can I automate things so that whenever something happens, an action occurs, a developer needs a virtual machine, or I want to send this off-site to DR, I can orchestrate it, automate it, and make it as simple to consume as possible? Because traditionally, IT is a bottleneck for moving the business forward: I need to go out and procure hardware and networking and all the stuff that goes along with it. So what if I were able to orchestrate all of those components, leveraging API calls back to my infrastructure, so that a user just fills out a web form? Those are the types of challenges that organizations, in my opinion, are looking to overcome. Now I want to build on that for a second, because a lot of folks immediately go to: oh, so we're going to use technology to replace labor. And while some of that may happen, the way I look at it, and the way we look at it, is that the real advantage is that new workloads are coming at these guys at an unprecedented rate. So it's not so much about getting rid of people; there may be an element of that, but it's about allowing people to perform more work with these new technologies. Well, more work, but focused on what you should be focusing on. Of all the senior executives that I... That's what I mean. Yeah, all the senior executives that I talk to are looking to make better use of IT resources. Those IT resources are not only what's running in the racks in the data center; they're also the gentleman or the lady sitting behind the keyboard. What if I want to make better use of the intellectual capital they have, to provide value back to the business? That's what I see with pretty much everybody I talk to. Clint, this has been a great conversation. So once again, this has been a Cube Conversation with Clint Wycoff, who's a senior global solutions engineer at Datrium.
Clint, thank you very much for being on the Cube, and we'll talk again. All right, thanks, Peter. Once again, thanks very much for sitting in on this Cube Conversation. We'll talk to you again soon.