Live from Miami Beach, Florida, extracting the signal from the noise, it's theCUBE, covering the .NEXT conference. Brought to you by Nutanix. Now, your hosts, Dave Vellante and Stu Miniman. This is theCUBE. We're here at the Nutanix .NEXT conference. Don Foster is here, Senior Director of Product Management at Commvault in the office of the CTO. Don, welcome back to theCUBE. Good to see you. Thank you, Dave. Thank you, Stu. We met each other a couple of years ago. I think it's your first time on theCUBE, right? Indeed, it's my first time, yes. Thank you. So, Commvault, you're rebranding. I'm sure we're going to talk about Simpana. You guys have always been a leader in backup. Just hit the website to check out some of that new branding. So give us the update. What's new at Commvault, and what's new with you? Yeah, sure. So, as you went through our website, and I'm sure as a number of our customers have probably noticed, we have had a few changes within our branding. And it's not just about trying to change up colors and logos and modernizing the image; that is definitely part of it. But we also want our customers to realize that we know the industry itself is changing: the needs of the data center and the way infrastructures are driven. It's all about the data. And we want our customers to know that we're also focused on data and that information, and on making it look like not necessarily a challenge they have to manage, but really more of that next strategic asset they can start to drive more business value from. Well, that's been a theme for decades: information, information risk, backup, backup pain, on and on and on. And then with the big data and Hadoop movement, things really flipped, didn't they? People started to look at information as an asset more so than a liability. How has that changed your business?
Yeah, I mean, as customers are taking a look at all these new potential infrastructures, ways they can actually manipulate their own information, being a software company, we realize they have a choice in how they want to do that, and in many cases have a number of different solutions that they might want to leverage together to meet the actual business needs and goals they're trying to drive, and to extract that business value. We're helping customers do that by giving them the ability to have that choice, to leverage the infrastructures and software applications that they deem pertinent, allowing them to then manage and protect and share and provide the compliance they need back to their organization based on that information, and also helping to extract more business value from it. So what's going on at .NEXT? Maybe start with the relationship with Nutanix. What's that all about? Sure, so we're a platinum sponsor here at .NEXT, and we've been working with Nutanix for probably the last year or so, and we're really working to provide tightly integrated products for data protection and the overall data management solution within the Nutanix infrastructure. Again, it goes back to that whole idea of choice. Customers have a choice on what hypervisors they want to use. Why not use a hyper-converged infrastructure to build that out? Why not allow yourself an open software platform that can give you all the data management capabilities you may require, regardless of the cloud infrastructure or the hypervisor infrastructure? So we're working with Nutanix to ensure that we can tightly integrate our two products together, to give a customer a no-compromise solution on how they can manage the information, protect it, drive it across the lifecycle of their needs, and still extract the business value they're seeing today on their non-hyper-converged infrastructures as they make that sort of movement.
So what does that mean, integrated? Is that engineering-level, sort of joint testing, an out-of-the-box solution? Can you unpack that a bit? It's a mix of both. So it's validating that infrastructure running more traditional VMware, Hyper-V, or OpenStack-style configurations on Nutanix will also work with Commvault, but then also going those next two and three steps deeper. What areas can we work together on with the Prism and Acropolis infrastructure that Nutanix announced here at their .NEXT conference? How can we get more in-depth and granular with Nutanix, and provide more business value back through how we integrate together? Those are the things we're working on. So Don, Nutanix really talks about how infrastructure, and especially storage and storage-related activities, are so complex. You've got the entire information lifecycle we've talked about for so many years. And they talk about really just making it all invisible. Does that resonate with what you guys are doing? Does tying in your products fit the invisible message? It absolutely does. So part of what we've been doing is also trying to automate the whole understanding of when new workloads are created, how they have to be managed, how they're going to meet the necessary requirements, and even tying back to some of those simple infrastructure components, trying to make even more advanced, really complex ideas become as simple, or invisible, as possible. In fact, Nutanix is a member of our IntelliSnap Connect program. For those out there who may not know what IntelliSnap is, it's the way that Commvault takes all the rigors, all the complexity, of driving a hardware-based snapshot of a single virtual machine or a single application, ensuring everything is consistent, and allowing that snapshot to be the precursor for your entire data management process.
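The consistent-snapshot flow just described, quiesce the application, trigger a near-instant hardware snapshot, then hand it to the rest of the data management process, could be sketched roughly like this. All class and function names here are illustrative assumptions, not Commvault's or Nutanix's actual API:

```python
# Hypothetical sketch of an application-consistent, hardware-snapshot workflow.
# Every name in this sketch is an assumption for illustration only.

def consistent_snapshot(app, array):
    """Quiesce an application, take an array-level snapshot, then resume I/O.

    Returns a record that downstream data management (indexing, copies,
    archive) would consume as the starting point of the lifecycle.
    """
    app.quiesce()                          # flush buffers so on-disk state is consistent
    try:
        snap = array.snapshot(app.volumes)  # near-instant, hardware-based snapshot
    finally:
        app.resume()                       # pause lasts only as long as the snapshot call
    return {"app": app.name, "snapshot_id": snap, "consistent": True}


class FakeApp:
    """Stand-in application used to demonstrate the flow."""
    def __init__(self, name, volumes):
        self.name, self.volumes = name, volumes
        self.quiesced = False
    def quiesce(self):
        self.quiesced = True
    def resume(self):
        self.quiesced = False


class FakeArray:
    """Stand-in storage array that hands back snapshot IDs."""
    def __init__(self):
        self.count = 0
    def snapshot(self, volumes):
        self.count += 1
        return f"snap-{self.count}"
```

The try/finally is the point: the application is paused only for the instant the array needs, which is what makes a hardware snapshot a viable precursor to the full backup lifecycle rather than a lengthy outage.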
So by being a member of that program, we're working together with Nutanix to ensure that we can take their infrastructure and platform and then provide that next level of access and recovery. Yeah, I mean, if you look at it, customers very much live in a multi-hypervisor and multi-cloud world. It used to be simple: pre-virtualization, I knew everything lived on that server. Then it might live on this hypervisor. Today, my application in many ways is going to live in a lot of different places. My data is all over the place. It sounds like that fits into your IntelliSnap. It puts us in a great position, because with the way we help manage those hypervisor environments, we don't really mind what hypervisor a customer uses. We have a solution that can fit the needs across the board. So what about the go-to-market? Talk about that a little bit with you guys and Nutanix. Yeah, so today it's really about meeting in the channel with our joint channel partners, also meeting at the customer, really understanding what customer needs are and how our channel partners are hoping to go and solve those needs. That's really how we're doing that go-to-market today, and that's probably how it will be focused, at least in the foreseeable short-term future. So, talk about the new branding. What's the genesis of that? What's the impetus behind it? What does the new brand say? So the new brand is all about modernizing and realizing that data has become your next strategic asset. Stop looking at data growth as a challenge, and truly start thinking about it as something you can leverage to drive better business value, to really improve your standing as a member of the IT team. So the way the brand comes together, it's all about sharper thinking; that's the whole idea behind it.
A more modernized, sharper thinking in how we can actually take the information, the data we're collecting, and turn it around to speed business value back to that end user. Whether it's driving corporate compliance or reusing that information for other business-generating processes, there's a number of different use cases, all the way out to actually providing more efficient experiences for the end user. All of that ties back into this whole sharper-thinking idea and helping customers deliver on business outcomes, not just technical features, but showcasing how those can drive some really valuable business outcomes. So let's talk about that some more. I mean, the value proposition of backup has always been insurance. I'm going to pay a bunch of money, and you're going to protect my data, and hopefully if I ever need it, I'll be able to get it back. Hopefully nothing will happen, but if it does, it's there. And everybody hates buying insurance, but they have to do it. What I'm hearing is that you're transcending that insurance business value, where I'm calculating how much data I might lose, how much business I might lose, to one where I'm leveraging the platform across compliance and maybe other insights and the like. So, like development and testing, even running some form of analytics and reporting. We just recently announced, I think two or three weeks ago, an ability to run development and test applications in the cloud with Amazon AWS and Microsoft Azure. So taking that backup information and allowing you to repurpose it to drive faster dev and test environments and dev and test workloads, right? Speeding the creation, the generation, the allocation of those compute resources, not only on premises, in a private cloud or on your current on-premises hypervisor solution, but also onto those public cloud workloads as well.
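Conceptually, repurposing a backup copy to stand up a dev/test workload might look like the minimal sketch below. The function, field names, and the list of supported targets are all assumptions for illustration, not the actual product interface:

```python
# Illustrative sketch: provisioning a dev/test environment from an existing
# backup copy rather than from production. Names and targets are assumptions.

SUPPORTED_TARGETS = {"on-premise", "aws", "azure"}

def provision_dev_test(backup_copy: dict, target: str) -> dict:
    """Clone a backup copy into a writable dev/test workload on the target."""
    if target not in SUPPORTED_TARGETS:
        raise ValueError(f"unsupported target: {target}")
    # The backup is the source of truth; production is never touched.
    return {
        "source_backup": backup_copy["id"],
        "application": backup_copy["application"],
        "target": target,
        "writable_clone": True,
    }
```

The design point being illustrated: because the dev/test clone is cut from the backup copy, spinning up a test environment adds no load on, and no risk to, the production system.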
And that really, again, kind of flips the idea of backup as insurance on its head, because you can start using this information for more than just the what-happens-if-someone-deletes-something in my data center. It starts turning into a true business-generation, or kind of an innovation-generation, process. And those are the solutions we're now delivering to our customers. So, backup's always been kind of an afterthought, you know, a bolt-on. Oh, I've got an application, I'm going to deploy an application, I've got to back it up. How is backup changing, and how is Commvault advancing that change from both a technology and an operational standpoint? Sure, so I think it starts with realizing you're backing up so you can recover, so you can get access to that information. So the way we continue to innovate is by trying to give customers the fastest possible way to actually get access to information back when they need it, and also providing as many different ways as possible to share that information, whether it's across different end users, across different constituents or tenants, or even just for different business processes. It's the way we're making that information open, so that once you touch the data, you touch it once, you have all the lifecycle policies assigned to that information, so it's no longer just sitting there as a long-term copy you don't have access to or awareness of, but is also opened up for a number of different, like I say, business-generation or analytics-based reports that you can run against that environment. Don, what's the technical enabler there? I mean, I'm sure it's a combination of things: a catalog, a lifecycle management platform, some kind of data movement engine. So take us through that. Yeah, sure. There are really three keys here. It all starts with that single index.
So if you look across the industry at what data management and data protection in general have meant: if you're a customer trying to get solutions to manage your Oracle environments, your hypervisor environments, potential environments in the cloud, end users, and then maybe expanding out into data management methodologies for archive and compliance journaling, et cetera, you may have a multitude of solutions. With each of those different solutions or silos, you have different indices that you have to go and scan across and search against any time you need to actually bring information back or produce it for some compliance or governance need, and that just becomes more and more complex. So we start off by providing that one single index, which right out of the gate ultimately simplifies knowing where that information is being stored. So that's the first key: we understand where it's being stored, where it came from, and we capture a really broad and deep amount of information. A sales guy I once worked with called it the banana story. Other companies will tell you, hey, that banana came from Ecuador on this date; put it over here in that box. When Commvault collects information, with the breadth and depth of our indexing, it's more like: hey, that banana came from Ecuador, it was picked on this day of the month, it came from this type of banana tree, it was shipped on this type of boat, it arrived in our port at this time, and it should be put in this box. So, a lot more breadth and depth on just what information is coming in. That's the first big key, because it allows us to really know where data is going, where it came from, and how important it might be. The second key is really providing all the management constraints, or the management automation, trying to make it invisible, right?
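A toy version of the "banana story" index just described, one index spanning every workload type with deliberately broad metadata per item, might look like this. The field names and the index class are assumptions for the sketch, not a real Commvault schema:

```python
# Toy sketch of a single, cross-workload index with broad metadata per item.
# All field names here are illustrative assumptions.

class SingleIndex:
    def __init__(self):
        self._entries = []

    def add(self, **metadata):
        # Every protected item, whether a banana, a VM disk, or a laptop file,
        # lands in the same index with the same rich set of fields.
        self._entries.append(metadata)

    def search(self, **criteria):
        # One query spans all workloads; no per-silo indices to scan.
        return [e for e in self._entries
                if all(e.get(k) == v for k, v in criteria.items())]

index = SingleIndex()
index.add(item="banana", origin="Ecuador", picked="2016-06-01",
          tree_type="Cavendish", shipped_on="reefer-ship", box="A7")
index.add(item="vm-disk", origin="esx-host-3", picked=None,
          tree_type=None, shipped_on=None, box="tier-1-array")
```

The contrast with the siloed approach is that a compliance search hits one `search()` call instead of one scan per product, which is the complexity the single index is meant to remove.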
Once we understand the details of the meta information that we're collecting, knowing how to manage it, orchestrate it, retain it, and when to delete it all becomes key, regardless of the actual physical or virtual infrastructure the customer uses. Then the final key is the fact that we provide one virtual repository that manages that entire lifecycle. So whether we're talking snapshots that live inside that production storage array or that production hypervisor infrastructure, all the way out to a Glacier-based, long-term deep cloud archive, we still have the knowledge in that index and the management ability to pull that information back and apply it to those business-value routines. So that's pretty cool tech. I wonder if we can unpack that even a little bit further. So the single index, that's where you're creating metadata, even more metadata than you're collecting? Yes. Okay, so that's your banana store. And that's done on an automated basis, I presume. Correct. So you've got an automated data collection engine, and you can classify with that, presumably. You got it. And then that goes into some kind of policy engine that you then manage. You have a metadata management system that allows you full visibility into the lifecycle of the data. And it's really all integrated together. So the moment we collect that data, whether we're talking about backup, whether we're talking about something like archiving, trying to do storage-based management, or whether it's just an end user saving information up into their own mobile end-user drive, any way that we collect that information, we're scraping that meta information so we can understand who it came from and what its value might be, applying it to the policy managers, and from there storing it in the appropriate virtual repository. Can you do that with agents?
We do that with agents, and we also do it by tying in with a number of our partners, calling the necessary APIs to scrape as much meta information as possible. So you can do that down to what level? Server, device, machine? So, a perfect example: we're here at .NEXT, right? We're talking a lot about compute workloads here, virtual machines. We can give you information on the compute workload, the virtual machines that make up that workload, go down into the actual files and constructs of the virtual machine itself, even go as deep as the applications and the databases and the files that make up those databases inside the virtual machine workload, and provide access and recoverability across the board there. So you're not just looking at one purview of the information; the granularity goes down to single files, single objects, a single database, all the way up to the entire compute workload, be it one virtual machine or multiples of them, and then giving you the portability to access them in whatever compute workspace is necessary. And your system of record, if I can use that term, is the backup server? Or is it more of a distributed architecture? It is a distributed architecture. We do still have what's called a brain, or a master, that drives everything, and yes, that is highly available for all the HA and BCP needs of customers. But beneath that, we have a distributed architecture that helps us capture all that meta information. So your brain, you're protecting the metadata like you would protect your child. Pretty much, making sure we always know where it is at any given time. Okay, and then can you go down to the device level, or do you pretty much stay at the enterprise server level? Well, when you say device level, meaning? Laptops. Absolutely. Mobile. So we... Because risk, by its very nature, is distributed these days. Absolutely.
So we actually have specific solutions, and it's kind of the power of the platform; it goes back to that silo conversation and the one index. This solution will span not only physical servers, virtual servers, and information and workloads that might live in the cloud, but it also touches the end user. We have the ability to actually collect information off of laptops. In fact, I myself use it to help ensure that my productivity remains high. I have multiple devices. I'm synchronizing all of my business-generated content between my corporate headquarters and my two remote devices. I have access to my information through my phone at any time. I can share it with Dropbox-like capabilities among my own internal constituents and those outside the business. And all of that happens through the enablement of what that virtual repository and that one index provide our customers. Our web services tie all this together to enable that enterprise capability without having to pull in another solution or more storage. I mean, that's like iCloud for the enterprise. You've got it. That's exactly it. And then you talked about deleting stuff. It's a big issue, right? So I touched on classification before. Do you sort of auto-classify the data upon creation or use? How does that work? So there are a couple of ways customers can achieve that. Yes, we can do classification beforehand and then put the necessary policies in place to help direct the information to the right storage location, the right policy. In many cases, what customers will also do is leverage our ability to actually classify it after the fact. So if you think about your data lifecycle, your data lifecycle timeline, you're typically protecting information, whether it's every seven days or 14 days, with as many versions as possible for business continuity, right? To ensure that you don't lose data.
Well, in that 14-day time span, or maybe it's 30 or whatever it might be, we're collecting an inordinate amount of meta information behind those unstructured and structured files. From there, after we collect that information, we can actually go through and cull through the data we've protected and collected, and then ensure that that information gets stored to the right policy based upon its content, even the context of where it came from, and even tie that together for really in-depth content- and context-based classification, so that if it's a contract that has to do with finance and has Don Foster's name attached to it, it gets kept for 30 years. We can do that after we've actually touched the information. So that allows us to keep that data lifecycle, disaster recovery, or BCP process intact for that entire 7-, 14-, or 30-day period, and then start actually pulling the information from that to be stored for a longer period without ever making another copy. It's the benefit of that single virtual repository, right? Of being able to know where that information lives at any given time. So, very efficient, but at the same time protected. You got it. And then, you mentioned deleting before. The big conversation in governance circles is, one policy is keep everything forever, which in this big data day maybe makes sense; there's value in those nuggets. The flip side is there's risk as well. If I'm a pharmaceutical company, I don't necessarily want to save work in process where somebody said, ooh, this might be an issue, just as an example. So, this notion of defensibly deleting data: how do I know when it's time to delete? Can you help me with that? You and a combination of ISVs, I presume. Maybe talk about that a little bit. And also internal policy. Whenever we talk to customers about how you want to manage those retention policies, when you want to delete, rule number one is at least have a policy in place, right?
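The after-the-fact classification in the example above (a finance contract with Don Foster's name attached is kept 30 years) could be sketched as a rule applied over already-protected data. The 30-year rule mirrors the conversation; the 7-year default and every field name are assumptions for illustration:

```python
# Sketch of content- and context-based classification applied after backup.
# Field names and the 7-year default retention are assumptions.

def retention_years(doc: dict) -> int:
    """Decide retention from a document's content and context metadata."""
    is_contract = doc.get("doc_type") == "contract"
    finance_context = "finance" in doc.get("context", [])
    named_party = "Don Foster" in doc.get("names", [])
    if is_contract and finance_context and named_party:
        return 30
    return 7  # assumed default retention

def reclassify(protected_docs: list) -> list:
    # Runs against data already protected in the virtual repository,
    # so no extra copy is made just to apply the longer retention.
    return [{**d, "retention_years": retention_years(d)} for d in protected_docs]
```

Because classification happens against the existing protected copies, the 7/14/30-day backup cadence stays intact and the long-retention tiering is a metadata change, not another copy.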
And whether that is, I want to delete certain information after a certain amount of time, you actually have to have some framework, some rules that are put in place. That's really important, because a plaintiff's attorney will attack your policy if you don't have one. Exactly. So start there. Exactly, and then from there it's all about classification. Now, the great thing, because of having that one index and that one virtual repository: say at some point you decide you need to change your policy, where a certain classification of information suddenly no longer needs to be kept for seven years; it needs to be deleted today, or maybe kept for three years. We can retroactively go back and ensure that all the information fits that policy based on that content classification. So that's the power. We go back to having that one index, that one repository, that one set of policy management. That's really what sets us apart versus competitors that may have acquired multiple stacks, because you don't have to go to four or five or ten potentially different places to manage where the information lives, and if you have to change or delete it, you only have to do it once. And this is Simpana, right? This is part of our Simpana platform. Awesome. Well, I'm hearing a lot of consistency with what I hear from Nutanix, what we call inter-cloud, multi-cloud, the whole notion of making things invisible. So it sounds like you guys are just getting started on a good road. What's next for you at Commvault and your relationship with Nutanix? What should we be looking for? Well, I think what's next is us really starting to put the partnership into practice and let the rubber hit the road here, but together we already have a number of joint successes. We've done the integration work; we know our solutions work fantastically together.
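The retroactive policy change described in this exchange, applying a new retention once across one index rather than once per silo, amounts to a single pass over the repository. A minimal sketch, with field names assumed for illustration:

```python
# Minimal sketch: retroactively applying a changed retention policy in one
# pass over the single index/repository. Field names are assumptions.

def apply_policy_change(index: list, classification: str,
                        new_retention_years: int) -> int:
    """Update retention for every item of a classification; return count changed."""
    changed = 0
    for item in index:
        if (item["classification"] == classification
                and item["retention_years"] != new_retention_years):
            item["retention_years"] = new_retention_years
            changed += 1
    return changed
```

With multiple acquired stacks, the same change would have to be repeated in each product's own index; with one index it is this one loop, run once.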
I'd say there are some things we need to achieve and finish off in the IntelliSnap Connect program, where we're starting to drive and orchestrate even their internal snapshot mechanisms to provide customers that consistency inside of their platform. I think there are some things we can do to go even further with provisioning and a number of other areas. I think we should probably stay tuned, maybe around the upcoming VMworld and other conferences coming this summer. You'll probably hear some more coming from this relationship and partnership. All right, Don, well, thanks very much for coming on theCUBE. Commvault, I mean, I don't even think of you as a backup company; I think of you more as a data management company, and now you're coming into the new world of driving data value, so appreciate your time. Fantastic, thank you very much, Dave and Stu. All right, keep it right there, buddy. Stu Miniman and I will be back right after this word. This is theCUBE. We're live from Nutanix .NEXT in Miami. We'll be right back.