Live, from Las Vegas, it's theCUBE. Covering Veritas Vision 2017, brought to you by Veritas. Welcome back to Las Vegas, everybody. This is theCUBE, the leader in live tech coverage, and we're here covering Veritas Vision 2017. The hashtag is #VtasVision. My name is Dave Vellante, and I'm here with Stu Miniman, my co-host. David Noy is here. He's the vice president of product management at Veritas. David, thanks for coming on theCUBE. Oh, thanks for having me. I'm pretty excited. Yeah, so we enjoyed your keynote today, taking us through the new product announcements. Let's unpack it. I mean, you're at the center of it all. Actually, you know what? Let's start with the way you started your keynote: you recently left EMC. That's right. Came here. Why? Why was that? Look, I talk to lots and lots of customers, hundreds, thousands of enterprise customers. They're all trying to solve the same kinds of problems: reducing infrastructure costs, moving to commodity-based architectures, moving to the cloud. In fact, they did move to the cloud in anger. If you look at the NAS market in 2016, it had been on a nice 2% incline until about the second half of 2016, when it basically dove 12%. And a big part of that was enterprises who had been kicking the tires finally saying, you know what? We're going to move to cloud, and actually doing it, as opposed to just talking about it. EMC and a lot of the other big-iron vendors have a strategy that they discuss around helping customers move to cloud, helping them adopt commodity. But the reality is, they make their money, those big margin points, on selling branded boxes, right? And as much as it's lip service, it's really hard to fulfill that promise when that's where you're making your revenue, when you have revenue and margin targets. Veritas, on the other hand, is a software company. We're here to sell software.
We're here to make your data more manageable, to help you understand it, to find the truth in your information. I don't need to own every bit. And I thought that the company that can actually deliver the real promise of what software-defined offers is going to be a software company, that's number one. And number two, you can't buck the trend of the cloud. It's going to happen. Either you're in the critical path trying to provide friction, in which case you're going to become irrelevant pretty soon, or you enable it and figure out how to partner with the cloud vendors in a non-threatening way. And I thought that Veritas, because of its heterogeneity background, hey, you want AIX, you want Linux, you want Solaris, great, we'll help you with all of those, can do the same thing with the cloud, and the cloud vendors will partner heavily with us because they love us for that reason. Before we get into the product, let's unpack that a little bit. Why is it that, as Veritas, you can participate in and profit from that cloud migration? We know why you can't as a hardware vendor, because ultimately the cloud vendor is going to be providing the box. Yeah, well, the answer is a couple of things. One is that we believe, and even the cloud vendors believe, that you're going to be in a hybrid environment. If you project out over the next 10 years, it's likely that a lot of data and applications and workloads will move to cloud, but not all of them will. You'll probably end up at about a 50-50 split. And the vendor who can provide the management and intelligence and compliance capabilities and the data protection capabilities across both your on-prem and your off-prem estate, as a single unified product set, is going to win, in my opinion. That's number one. Number two is that, look, the cloud vendors are all great, but they specialize in different things.
Some are more specialized in machine learning, some are really good with visual image recognition, and some are really good with mobile applications. And people are, in my opinion, going to very quickly go to one, two, three, four different clouds, just like I would go to contracting agencies. Some might be good at giving me engineers; I might go to dice.com for engineers and somewhere completely different for finance people. You're going to use the best-of-breed clouds for specific applications. Being able to aggregate what you have across your universe of multi-cloud and your hybrid environment, and then allowing you, as an administrator, to be aware of all your assets, is something that, as a software vendor rather than a branded box pusher, I can go do with credibility. You're a recovering box pusher. I'm a recovering box pusher. I'm one month into recovery, so thank you very much. David, one of the things we're trying to understand a little bit is, you've got products that live in lots of these environments. Why do you have visibility into the data? Is it because they're backup customers? Is it other pieces? Help us understand, in that multi-cloud world, what you need to be to get that full view. So that's a great question, and I'll bridge into some of the new products, too. Number one is that Veritas has a huge amount of data that's basically trapped in repositories, because we are the largest backup vendor. And so we have all this data that's sitting inactive. Mike Palmer, our CPO, talks about it kind of like Uber: what do you do with your car when it's not being used? Or Airbnb: what do you do with your home when it's not being used? You potentially rent it out, right? You make it available for other purposes. And with all this trapped data, there's tons of information that we could glean that enterprises have been gathering for years and years and years.
So that's number one: we're in a great position because we hold a lot of that data. Now, we have products with the capabilities, through classification engines that we're extending with machine learning, to open that data up and actually figure out what's inside. We can do that with backup products, but let's face it, data is stored in a number of other modalities too, right? There's block data sitting at the bottom of containerized public or private clouds. There are tons and tons of unstructured data sitting in NAS repositories. And there's object storage technology, growing off-prem but also on-prem, for set-it-and-forget-it long-term retention. All of that data has hidden information; all of it can be mined for more value. With the same classification engines that we can run against that backup estate, we can extend into these new modalities and have compelling products that are not just offering infrastructure, but offering infrastructure with the promise of making that data more valuable. Make sense? It does. I mean, it's the holy grail of backup, right? For years it's been insurance, and insurance is a good business, don't get me wrong. But even when you think about information governance, through Sarbanes-Oxley and FRCP, et cetera, there was always that desire to turn that corpus of data into something more valuable than just insurance. It feels like, as you're saying, with automated classification, machine learning, AI, we're sort of at the cusp of that, but we've been disappointed so many times. What gives you confidence that this time it'll stick? Look, there are some very straightforward things happening, right? Things you just cannot ignore. GDPR is one: there's a specific timeline, specific rules, specific regulatory requirements that have to be met.
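To make the classification idea concrete, here is a minimal, illustrative sketch of rule-based data classification of the kind a compliance engine might start from. The tag names and regular expressions are invented for illustration; they are not Veritas's actual engine, which would combine far richer rules, dictionaries, and machine learning models.

```python
import re

# Illustrative patterns only; a real classification engine would use
# many more rules plus ML models, not two regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set:
    """Return the set of tag names whose pattern matches the text."""
    return {tag for tag, rx in PATTERNS.items() if rx.search(text)}

print(classify("Contact alice@example.com for the invoice"))
```

Run against a trapped pool of backup data, even a crude pass like this surfaces which repositories hold personal data, which is exactly the inventory a GDPR deadline forces.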
That one's a no-brainer, and it will drive people to understand that when they apply our policies against the data they have, they'll be able to extract value. That'll be one of many, but it's an extreme proof point, because there's no getting around it. There's no interpretation of it, and the date is a hard date. Now, what we'll do is very quickly look at other verticals. We'll look at vertical-specific data, whether it's in video surveillance or genomic sequencing or what have you, we'll look at what we can extract there, and we'll partner with ISVs. This is a strategy I learned in my past life for bringing to market systems or solutions that can categorize specific vertical and industry data and provide value back to the end users. If we just try to provide a blanket, hey, I'm going to provide data categorization, that's a Swiss Army knife solution. If we get hyper-focused on specific use cases, workloads, and industries, we can be very targeted about what the end users care about. All right, and if I heard right, it's not just backup; it's primary and secondary data that you're helping to leverage and put intelligence into with these products. That's right. Initially we have an enormous trapped pool of secondary data, so that's great. We want to turn that trapped pool from a stagnant pool into something you can actually get value out of. That walking dead analogy you used. The walking dead, yeah. But there's also a lot of data that sits in primary storage, and in fact there's a huge category of archive called active archive. It's not really archive: you still want it on spinning disk or flash, you still want to use it for some purpose. But what happens when that data just keeps growing in an environment?
I talk to customers in automotive, for example, automotive design manufacturers. They do simulations, and they're consuming storage capacity all the time. They've got all of these runs, and they're overrunning their storage budget. They have no idea which runs they can actually delete, so they create policies like: if it hasn't been touched in 90 days, I'll delete it. But just because it hasn't been touched in 90 days doesn't mean there wasn't good information to be gleaned from that particular simulation run, right? All right, so I want to get back to object, but before we go deeper there: block and file, there are market leaders out there. It seems a bit entrenched, if you will. Between the HyperScale product and Veritas Access, what's the opportunity that you see for Veritas there? What differentiates you? Sure, well, let's start with block. The one big differentiator we'll have in block storage is that it's not just about providing software-defined storage to containerized applications. We want to provide machine learning capabilities so we can actually optimize the IO path for quality of service. Then we also want to be able, through machine learning, to determine whether, if you decided to run your business that way, you want to burst workloads out into the cloud. We partner with the cloud vendors, and we're happy to partner for the reasons I described earlier: we're very vendor-agnostic, we're very heterogeneous. Being able to actually move workloads on-prem and off-prem is a very differentiated capability. Among the vendors out there, I think Nutanix, for example, can do that, but it's not something everyone's going after, because they want to keep workloads in their own environment; they want to keep control. And that, if I can, that high-speed data mover is your IP? That's right, that's our IP. Now, on the file system side.
Just one thing: cloud bursting is one of those things where moving data in real time is difficult. Physics is still a challenge for us. Any specifics you can give us, a customer use case, of where they're doing that? A lot of times, I want this piece of the application here, I want to store the data there, but I can't move massive amounts of data in real time, just because of the speed of light. So if you break it down, I don't think we're going to solve the use case of: I snap my fingers and the workload immediately moves off-prem. Essentially what we'll do is sync the data in the background; once the data has been synced, we can actually move the application off-prem. And that'll come down to one of two things, right? Either use cases that exceed the capabilities of the current infrastructure, where I want to be able to continue to grow without building out my data center. Or end-of-the-month processing. A great use case is actually a media and entertainment company I used to work with, who was working on a film. It got close to the actual release date of that film, and they were asked to go back and re-cut and re-edit the film for specific reasons, a pretty interesting reason actually, having to do with government pressure. And when they went back to edit that film, they hit a point of, oh my gosh, all of the servers that were dedicated to rendering this film have been moved off onto another project. What do we do now, right? Well, the answer is, you've got to burst, right? And if you had cloud burst capabilities, you could use whatever application, containerized, whether it runs on-prem or off-prem, it doesn't matter, it's containerized. If we can get the data out to the cloud through fast pipes, then you can finish that job without having to take all those servers back or repurchase that much infrastructure.
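The two-phase burst pattern described here, sync the data in the background, then cut the application over only once the copies converge, can be sketched very minimally. The function and store names are illustrative stand-ins, not the actual data mover; real implementations also handle incremental deltas, writes arriving during the sync, and failure recovery.

```python
# Minimal sketch of sync-then-cutover bursting. Dicts stand in for
# the on-prem and cloud data estates; move_app stands in for
# re-scheduling a containerized workload onto the cloud target.

def sync_then_cutover(on_prem: dict, cloud: dict, move_app) -> None:
    # Phase 1: background sync -- copy objects that are missing or stale.
    for key, value in on_prem.items():
        if cloud.get(key) != value:
            cloud[key] = value
    # Phase 2: only after the estates converge, cut the application over.
    assert cloud == on_prem
    move_app("cloud")

site_a = {"frame1": b"render-data-1", "frame2": b"render-data-2"}
site_b = {}
sync_then_cutover(site_a, site_b, lambda target: print("app now runs in", target))
```

The key design point is the ordering: the workload never moves while the data is still in flight, which is why "snap my fingers and it's off-prem" isn't the realistic model.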
So I think that's a pretty cool use case. It's something people have been talking about doing, but nobody's really ever successfully done, and we are starting to prove it out with some vendors and partners, larger technology partners who potentially even want to embed this in their own solutions. Now, you were talking about file as well, right? And what makes file different? You know, I spent five years with one of the most successful scale-out file systems, you probably know who they are. The thing about them was that extracting that file system out of the box and making it available as a software solution you can layer on any hardware is really hard, because you become so addicted to the behavior of the underlying infrastructure, the behavior of the drives, down to the SMART errors that come off the drives. You're so tied into that, which is great, because you build a very high-performance, highly available product when you do that, but the moment you try to go to any sort of commodity hardware, suddenly things start to fall apart.
We can do that. In fact, with our file system, we're not saying, hey, you've got to go put it on commodity servers with DAS drives in them. You could layer it on top of your existing NetApp, your Isilon, your VNX, you name it: encapsulate it and create policies to move data back and forth between those systems, or even provision them out, say, okay, this is my gold tier, my silver tier, my bronze tier. We can even encapsulate, for example, a directory on one file system array and actually migrate that data into an object service, whether it's on-prem or off-prem, and then provide the same NFS or SMB connectivity back into that data for the user. For example, a home directory migration use case: moving off of a NAS filer onto an object store, on-premise or off-premise, and to the end user, nothing appears to have moved. We think that kind of capability is really critical, because again, we would love to sell boxes if that's what the customer wants to buy from us, in an appliance form factor, but the boxes are not the ultimate endpoint. The ultimate endpoint is that software layer on top, and that's where the Veritas DNA really shines. That's interesting. I mean, the traditional use cases for block, certainly, and maybe to a lesser extent file, have historically been fairly well known and understood, so to your point, you could tune an array specifically for those use cases. In this day and age, the processes and the new business models emerging in the digital economy are very unpredictable in terms of their infrastructure requirements, and so your argument is that a true software-defined capability is going to allow you to adapt much more freely and quickly.
Right. And we've also built, and demoed at Vision this week, machine learning capabilities that actually go and look at the workloads running against that underlying infrastructure and tell you whether they are correctly placed or not. Oh, guess what? We really don't think this workload belongs on the tier you've chosen; maybe you ought to consider moving it over here. Historically that's been the responsibility of the admin, to figure out what those policies are and try to make some intelligent decisions, but usually those decisions are not super intelligent. They're just: is it old? Is it not old? Do I think it's going to be faster? You don't really know until runtime, based on actual access patterns, whether it's going to be high performance or not, whether it's going to require moving or aging or not. By using machine-learning-type algorithms, we can look at the data and the access patterns over time and help administrators make that decision. I see it. Okay, we're out of time, so just to summarize: HyperScale is the block piece, Access is the scale-out NAS piece, and cloud object is... Veritas Cloud Storage, we call it. And Veritas Cloud Storage, very similar to the Access product, is for object storage. But again, we're not trying to own all the object bits, if you will. We'll happily be the broker and the asset management tool for those objects, classify them and maintain the metadata catalog, because we think it's the metadata around the data that's critical, whether it lives off-prem, on-prem, or in our own appliance. And you had a nice X-Y graph: dollars on the vertical axis, high frequency of access on the left of the horizontal axis, lower SLAs to the right, and block, file, object as kind of the way to look at the world. And then you talked about the intelligence that you bring to the object world. Last question, let's end there.
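The access-pattern-driven placement described here can be illustrated with a toy heuristic: score each workload by its recent access frequency and suggest a tier. The thresholds, window, and tier names are invented for illustration; a real placement model would learn them from runtime telemetry rather than hard-code them.

```python
from datetime import datetime, timedelta

# Toy stand-in for access-pattern analysis: count accesses inside a
# recency window and map the count to a suggested storage tier.
def recommend_tier(access_times, now, window_days=30):
    window_start = now - timedelta(days=window_days)
    recent = sum(1 for t in access_times if t >= window_start)
    if recent > 100:
        return "flash"
    if recent > 10:
        return "disk"
    return "object/archive"

now = datetime(2017, 9, 20)
hot = [now - timedelta(hours=h) for h in range(200)]   # heavy recent access
cold = [now - timedelta(days=400)]                     # untouched for a year
print(recommend_tier(hot, now), recommend_tier(cold, now))
```

Even this crude version is already smarter than a blanket "delete if untouched for 90 days" rule, because it reasons about observed access, and an ML model can go further by predicting future access from the same history.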
Thoughts on object? Stu and I were talking off-camera. I mean, it's taken a long time. Obviously, S3 and the cloud guys have been there. You've seen some takeouts of object storage companies, but it really hasn't exploded. It feels like we're on the cusp, though. What's your observation about object? I think object is absolutely on the cusp. People have put it in the public cloud because traditionally, object has been used for cheap-and-deep, where performance doesn't matter, and the deeper you get, the less expensive it gets. A cloud provider is great at that, because they aggregate capacity across 1,000 or 20,000 or a million customers. They can get as deep as possible and slice it off to you. As a single enterprise, I can never get as deep as a cloud service provider. Not at that volume, right? But what's happening is that more and more workloads are not expecting to hold a connection open to their data source. They're looking at packetized, get/put-type semantics. You see it in genomic sequencing, you see it in a number of different workloads, even in Hadoop analytic workloads, where that kind of get/put semantic makes sense: not holding a connection open. Object is perfect for that, but it hasn't traditionally had the performance to do it really well. We think that providing a high-performance object system that also has the intelligence to do data classification, ties into our data protection products, provides actionable information and metadata, and makes it possible to use on-prem infrastructure as well as push to cloud or multi-cloud, while maintaining that single pane of glass for asset management of the objects, is really critical. And again, it's the software that matters, the intelligence we build into it that matters.
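The contrast being drawn is between file semantics, where a client holds a connection or file handle open, and object semantics, where each get or put is a self-contained whole-object operation. A minimal sketch, with an in-memory dict standing in for an S3-style service and invented key names:

```python
# Minimal sketch of get/put object semantics: every call is
# self-contained, with no handle or connection held between operations.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data   # whole-object write, then done

    def get(self, key: str) -> bytes:
        return self._objects[key]   # whole-object read, then done

store = ObjectStore()
store.put("genome/sample-42", b"ACGT" * 4)
print(store.get("genome/sample-42"))
```

That statelessness is what lets object stores scale out so cheaply, and it is also why workloads written against get/put semantics move between on-prem and cloud targets so easily.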
I think that primary workloads in a number of different industries and verticals are adopting object more and more, and that's going to drive more on-premise growth of object. By the way, if you look at the NAS market and the object market, you see the NAS market going like this and the object market going like this. It's left pocket, right pocket. And that get/put framework is just a simplifying factor there for organizations. Excellent. All right, David, thank you very much for coming on theCUBE. Appreciate it. Appreciate it, thanks for having me. You're welcome. All right, bringing you the truth from Veritas Vision. This is theCUBE, we'll be right back after this short break.