in resort in Las Vegas. It's theCUBE, covering the .NEXT Conference 2016. Brought to you by Nutanix. Now here are your hosts, Dave Vellante and Stu Miniman. We're back. Chris Wahl is here, the chief technical evangelist at Rubrik. Welcome back to theCUBE, and in a new capacity. Absolutely, last time I was here I was just an independent, you know, floating around the Nutanix conference. Now I get to represent. Just an independent, Chris, come on. You've got to keep the humbleness high and the ego low. We were, yeah, we were hanging out in Miami last year. You came on. We were having a chat about DevOps and changing cultures and things like that. And how'd you end up at Rubrik? You know, the CEO, Bipul Sinha, he's a very convincing guy. Data protection didn't seem like a very sexy thing, but then I got to know the company, the culture, the product, even more so than I did prior. And he's a hard guy to say no to. I'll just say that. And since then it's been a rocket ship. I'm very happy that I said yes. So why is data protection so interesting all of a sudden? Well, the architecture historically hasn't changed in 10 to 20 years. There have been minor improvements, but it's really been this kind of blacksmith effort where you get sweaty and you have to build a small data center to back up a larger one. And frankly, it's just no fun. And the folks at Rubrik really said, what if we could wipe the slate clean, converge all of those different pieces into kind of an appliance form factor, make it very software-centric, and really just make it so that it's something that's really easy to deploy. It's very API-centric. The user interface is actually something you want to open up. There's only one of them instead of 10 or 11 of them. And it handles most of the nuances for you automagically. So it checks all those boxes of what people doing data protection and data center architecture are looking for, in a very fun, very usable form factor and user interface. So it's a nice package. So it's kind of best practices out of the box. Imagine if you could take all of what you wanted data protection to be and just hand that to a policy, which is what everyone wants: policy-driven, declarative-type statements, things like that. And then you have this special, smart fabric underneath it that figures out the data protection for you: what to back up, where to put it, how often to put it in those places. That's fun, because then you set the backups and it's all about the restores and actually building services on top of it. So let's talk a little bit more about the business problem that people have with backup. So backup historically has been one size fits all. Here's the, you know, whatever, weekly full, daily incremental, and that's it no matter what the value of your app and your data is. It's viewed as insurance. It's expensive, it's complicated. It's a zillion different interfaces, as you described. Is that it? Want to add anything? Pile on if you like. I think you said it very well. Were you a backup admin? I've been following backup my whole career, unfortunately. And it's nasty, right? It's a gnarly problem. And then virtualization just makes it more complicated. My backup windows are growing. Well, my backup windows are shrinking, but I can't meet them. My RPO and RTO are shrinking and it's just hard. I'm missing backups. My backups are failing. That's key as well. You know, you brought up a lot of the pain points. The architecture really hasn't changed to address those. 
It's just been more about checking feature boxes than it has been about changing the architecture and the flow. And so one of the points you brought up that I think is really important is, do I know if my backups have occurred? Am I spending time to fix those, to check on them? And am I able to build policy that associates the importance of taking a backup with what that workload is? Rather than that one size fits all that says, thou shalt always do full backups on these days and incrementals on these other days, regardless of what the app is, we can take policy to really associate what the intent is with that service or that application or that workload, to marry those two together very finely. You know, kind of that fine-grained control that admins are looking for. So, and purpose-built backup appliances were like a blunt instrument. You know, we'll put them in and it's kind of a brute force thing. And it was a one-time little step function. Okay, great, got rid of tape. But now it's like data value is the same thing. I mean, the beauty of, you know, Data Domain was that you didn't have to change anything. But then, you didn't change anything. Okay, so along comes Rubrik. Your vision was what? To simplify backup, to protect data, to just ease the pain? It's really to make backup fun again, to make it simple, to make it elegant. And primary among all those is to make the user experience something that you enjoy actually doing, because a lot of the focus is going to be around restoring and going into that user interface again and again, or calling the API, whatever you want to do. But that process needs to be a first-class citizen in the experience for the admin, the storage admin, the user, whatever it may be. Because they're going to have to do that time and time again. And if it's very frustrating and has a lot of friction, it's not going to be something you want to do, or it'll take too much operational time to do it. So the API allows me to integrate, and I presume search is a factor here, other metadata capabilities? The search is a good point, because a lot of folks that work at Rubrik were in the Google and Facebook type of worlds where they're building search across a very large data set. And search is what drives the entire platform. So if you're looking at the solution as a whole, the way that you find the workloads you want to work with and the data that is associated with the workload is entirely search driven. When you use Google, you're used to that kind of predictive search; you're typing along and it starts to feed in what you're looking for. It's that kind of mentality, regardless of where the data lives, which I think is another key. It's not like we can only look at the data that's local in the appliance, in the storage array, whatever it may be. We can look at it whether it's been replicated or archived, maybe it's to object or cloud or file or whatever it may be. You can look at it holistically forever and you can restore that data very simply. So even if it's in a public cloud, you can get to it very, very easily. There's not a lot of friction there either. So Chris, before you joined Rubrik, you were actually looking at the hyper-converged space quite a bit. I know at Interop I went and saw your session on hyper-converged; I gave a session there too. Can you give us kind of your take? What do you think of where the hyper-converged market is, and how does the Rubrik story fit into that? Oh man, watching the hyper-converged story unfold has been exciting. 
I've been probably following it since before it had the name, like 2011 or so, and when I went to see Nutanix I was like, this is a crazy idea, it'll never work. And I was really wrong. I'm very sorry about that. Yeah, oh, this is crazy, but that's why I don't found startups. Look at where you are now. No, it's been a joy and really interesting as a technologist to see the portfolio develop, the fact that it's really all about the software core, providing services, linking everything with an API and allowing you to run. Traditionally it was about kind of the end-user compute space, and taking the friction out of putting infrastructure into the data center, making it much simpler. Now I see folks like Nutanix saying, well, we'll even make the hypervisor kind of an abstracted piece of that story. So whether you're running on ESXi or KVM or whatever, Acropolis is also now a choice. And so that's interesting to me. And even today they're talking about using block storage as a service and allowing self-service for some of the features and functionality and the container-type services. It's like, geez, what year are we in? It's like 2020 already, so it's a lot of fun to watch that come out. So when you talk to customers as the chief technical evangelist, right? What's the conversation like? So a lot of education, a lot of people haven't heard of Rubrik, right? So take us through a typical day in your life with a customer. Well, you're right about the education. A couple of things tie to the fact that we're turning a lot of the processes and the workflow on their head, and that takes a little bit for the light bulb to go off. As an example, traditionally you would look at taking a backup by crafting a backup job. And that means declaring very specifically: these workloads need to be backed up at this specific time and they go to this specific repository, and that whole 11 to 15 steps it takes to get the data where it needs to go, you have to very finely express that in an imperative manner, to say, do these things. And therefore you had to be an expert on the software, the backend, the database that's running it, all the repositories, the 15 or so technologies you had to master. And I tell people, well, with this system, we're handling that end-to-end data protection; all you're really doing is building a policy and just saying, here's my RPO, RTO, availability and replication requirements, and associating that, and it'll do it. And they're like, well, which node is doing the job? I said, well, that doesn't matter. Where does the data specifically go? Like, that doesn't matter either. These are things that you don't really have to worry about; you can go down into the nerd knobs if you want. But it's just that idea of saying I build a policy and I associate it and it does that. And you associate it with a workload or an object within the hierarchy of, for example, a VMware environment. There's a hierarchy of data centers and clusters and folders and ways that you can organize the workloads and the services that are running there. And we can associate policy at any one of those levels and actually watch for changes, so if something enters or leaves the environment, we can assign those policies dynamically. And that kind of blows them away a little bit. So you have that level of granularity where you can assign it to a blob or, like you say, a folder. Okay, and then I assign that policy, and if the characteristics of that, the data set within that entity, change, you're saying you can tell the system. 
Yeah, you can say if something enters a folder, like a really easy example: a new workload enters or leaves a folder. What needs to happen there? Normally it's put a ticket in and tell somebody about it and blah, blah, blah. Here we're just saying we're monitoring that folder, and when a workload comes in, associate the gold policy with it, or silver, or something like that. Wherever it goes, the policy follows it. Yeah, or you can say as it moves into a different container, like from one folder to the next, release the policy from the first folder and apply the policy from the second folder. So you can make it so that it floats around, and that's helpful for people that are going through some kind of application development stack; as it goes from one stage to the next, all the way through production, the policies can float with that particular workload. There's a lot of value in abstracting that away from what's actually happening with the application. So you have to get people to do a bit flip on their process. A little bit. Is that the hardest part? It's not hard in, like, generating excitement about it. Yeah, I'm sure they get excited. They see like, oh wow, that's amazing. It's a little bit about, well, you know, what if? What if I do this? What if I do that? It is scary. It's a new way of thinking. In some ways you're kind of releasing control. You're saying I'm going to do this in a policy-driven, declarative manner, which is a little new, especially in the infrastructure space. I think that development folks are very used to this kind of define something in a file or config and just let that flow. It's much different for, you know, I'm an infrastructure person myself, an engineer for a long time. It's like, oh, I don't put my hands on this. I just let the policy dictate it. Okay, I need to see it a little bit first, and then you see it work and you're like, oh, well now I can do much more interesting things with my time, and then they're very happy. So Chris, when you engage with Nutanix customers or people that are looking at Nutanix, are they open to that? Are they looking towards that? Does Nutanix kind of prime the pump for some of these discussions? You know, how does that fit? Oh, absolutely. You know, it's, I was saying to some customers earlier, it's like you have this nice Ferrari with your Nutanix, it's next-gen, it's very scale-out, it's node-based, and then you would hitch some old, archaic backup system to it. You know, you can't drive 200 miles an hour when you have a cart attached to you with some old lawnmowers behind it or something like that. So really, why not take this next-gen, scale-out type of environment and associate a data protection solution that's almost the same as far as that mentality: API-driven, scale-out, node-based, very simple to consume. And so the fact is that you can scale both in a very, very similar manner, right? It doesn't hold you back; you can move both pieces of the puzzle at the same time. It's very digestible, I would say. People, they get the one and then the other one's not that far off. So when you guys go into accounts, what's it look like? So you get, pick your backup software, Symantec, you know, EMC, HP Data Protector, whatever it is, we all know who they are. So they're out there and they're pretty entrenched. So you go in and what? You say, sweep the floor, all that stuff? Let's now re-architect your backup. Is that a typical use case or is it more, let's peel the Band-Aid off very slowly? 
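The policy-driven model described above, assigning an SLA rather than building a backup job, roughly translates to something like the sketch below. This is illustrative only: the cmdlet and parameter names approximate the Rubrik PowerShell module the guest mentions later in the conversation, and the exact names may differ by version.

```powershell
# Illustrative sketch: cmdlet and parameter names approximate the Rubrik
# PowerShell module and may differ by version.
Import-Module Rubrik

# Authenticate to the Rubrik cluster (hostname is a placeholder)
Connect-Rubrik -Server 'rubrik01.lab.local' -Credential (Get-Credential)

# Declarative protection: no job, no schedule, no target repository.
# The 'Gold' SLA domain already encodes RPO, retention, replication and
# archive intent; assigning it is the only step the admin performs.
Get-RubrikVM -Name 'ora-prod-01' | Protect-RubrikVM -SLA 'Gold' -Confirm:$false
```

The same assignment can be made higher in the hierarchy, at a folder or cluster, which is what enables the dynamic behavior described above: a workload entering or leaving that container simply inherits or releases the policy.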
Well, first we give them a hug and we say, we're sorry you've been abused by legacy architecture for so long. But seriously though, the approach that I find is most palatable as an architect, and as someone working for a startup, is that we don't have any desire to be the end-all, be-all solution for data protection on day one. That's a tough pill to swallow, and we're still building out our feature set, right? So we're not the everything to everyone yet. That's going to take time and there's some trust that needs to be built up. So we work really well with VMware as an example right now. We're adding feature functionality around physical SQL, physical Linux. Put us in, buy a couple of appliances, start migrating the virtualized workloads over, build that trust, build that out, and then as we add features and functionality for the rest of your data center, start plugging that in through the API, the familiar user interface, et cetera, and just keep kind of absorbing the rest of those use cases until we're all over the place in the data center. So it's kind of sunset the legacy, keep investing in the new. Because regardless of what protection you have, whenever you change from one vendor to the next, you're always going to have to sunset the original stuff anyway, because you probably have seven years of tapes or something like that. We can reduce the TCO significantly by saying sunset a lot of the licensing, or whatever it is that you're doing to maintain that legacy architecture, start introducing the new, get any net-new workloads over to it, and then life is just way better. Operationally it's much less, and typically CapEx is far reduced as well. So what else is under the covers? You guys maintain a catalog? Yeah, everything that comes in is fully cracked open and indexed. All the metadata is distributed, all the catalog information, that index information, is distributed, and if we send that data to an object store, cloud, any kind of archive bucket, we co-locate that metadata with the data that gets archived, in case there's a disaster and you lose the entire appliance. That way we're always backing ourselves up, and if you had to do some kind of very horrible restore, we can actually re-point to the archive and restore ourselves. So it's always keeping itself backed up too. Awesome. Last question, are you finding new use cases? Whether it's in development, copy management, different ways to use that metadata, use that catalog, hybrid cloud, I mean, I can think of several. There's a lot of ways you can slice and dice. Primarily we're trying to be laser focused on data protection, backup and recovery; it's a very hot use case in and of itself. But I'm finding that, so I'm a very technical person just by nature, and I wrote a lot of the modules for PowerShell automation and kind of various scripting automation, and it's interesting to see folks use that framework and build out, as an example, the ability to spin up multiple copies of a workload based on a backup every night. I have a customer where each night it goes in and does what's called a live mount within Rubrik, which means they're using our storage to provision virtual machines based on a backup. They build out a couple of those, use them for analytics, some dev tests, some kind of upgrade tests, things like that. It's all automated, they're spun up. At the end of the day, the script sunsets them and they're thrown away, and all they're doing is tracking the deltas, with Rubrik as storage and their hypervisor as compute. 
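The nightly workflow just described, spin up live-mounted copies from the latest backup, use them for testing, then tear them down, might look roughly like the following with that same PowerShell module. Again, this is a sketch: cmdlet and parameter names are assumptions based on the Rubrik PowerShell module and may not match the shipping syntax exactly.

```powershell
# Rough sketch of a nightly live-mount job; names and parameters are
# approximations of the Rubrik PowerShell module, not verified syntax.
Import-Module Rubrik
Connect-Rubrik -Server 'rubrik01.lab.local' -Credential (Get-Credential)

$vm = Get-RubrikVM -Name 'prod-app01'

# Most recent backup of the production workload
$latest = Get-RubrikSnapshot -id $vm.id |
    Sort-Object -Property date -Descending |
    Select-Object -First 1

# Provision a few clones straight off the backup: Rubrik supplies the
# storage, the hypervisor supplies the compute, and only deltas accrue.
1..3 | ForEach-Object {
    New-RubrikMount -id $latest.id -MountName "prod-app01-test0$_" -PowerOn
}

# ... analytics, dev/test and upgrade testing run against the clones ...

# End of day: the script sunsets the clones and the changes are discarded
Get-RubrikMount -VMID $vm.id | Remove-RubrikMount -Confirm:$false
```

Because the clones are never copied back into primary storage, the only footprint is the change data written while they run, which is why the customer can afford to throw them away and rebuild them every night.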
And so it's just neat to see people take these bits and pieces, these building blocks, and say I'm going to do something that I've never been able to do with backup before, and run something automated to help the business do pre-production-type value-add on top of my backups. It's pretty cool stuff. All right, Rubrik, hot company. Chris, congratulations on the new role, thanks for coming on and explaining in more detail what's going on with Rubrik. My pleasure, thank you. All right, keep right there everybody. We'll be back with our next guest, day one at .NEXT, this is theCUBE.