Live from Las Vegas, Nevada, it's theCUBE, covering EMC World 2015. Brought to you by EMC, Brocade, and VCE. Okay, welcome back everyone. We are live in Las Vegas for EMC World 2015. This is theCUBE, our flagship program, where we go out to the events and extract the signal from the noise. I'm John Furrier with Dave Vellante. Our next guest is Steven Manley, CTO of the Core Technologies Division at EMC. Welcome back to theCUBE. Great to see you. It's good to see you guys again. How have you been? So Core Technologies, what does that mean? We just had Mike Olson on from Cloudera. We're hearing all about flash. The infrastructure's changed and we're in an information generation. Flash is dominating. What's going on? Give us a quick tutorial on the group and that new role. Right, it was just data protection before, and now? What's happening? Yeah, so I think of the Core Technologies Division as the group that really is managing that big, core infrastructure, the thing that in a lot of ways people have associated with EMC for so many years. So inside the division, you've got everything from the data protection piece to the VMAX, to the VNX, to XtremIO, to some of the cloud technologies that we've bought. But it really is all about, for me, more efficiently running a lot of those workloads that businesses have been built on for the last 20-plus years, but doing it better, doing it more flexibly, like you said, shifting to flash, shifting to cloud, but really those traditional workloads. Let's break that down, double down on it, because I want to talk about that. We had Mike Olson talking about the whole data warehousing thing, and they're trying to disrupt with their data hub, data lakes, everywhere out there. So you have existing, blocking-and-tackling workloads that aren't going anywhere, right? But they need to be updated, modernized, if you will: backup and recovery, et cetera, et cetera.
And then you've got the shiny new toy in flash, which we predict will dominate most of the market-share gains because of low-latency data. So where is it all connected? Break it down for the users out there. I see those workloads, and the previous workloads aren't going away. What's changing about them, and what's going on with the flash piece and the new tech that they should focus on? So I think one of the things that we look at a lot, in the previous times when we've talked in the backup and data protection space, is these waves of change that come through, where you have your traditional infrastructure, and some of what's sweeping through there is driving a lot of services into the infrastructure, rather than being the old separate buying decision where I'd buy storage, and separate from that I'd buy backup software, and I'd buy backup hardware, and I'd buy servers, and I'd buy virtualization. We see pivoting toward these new approaches that kind of converge that infrastructure together. But if you look at it, a lot of what has to happen, even in that first wave of technology, that traditional storage and services, is that a lot of customers are just looking at it and saying, I'm spending way too much money and way too much time managing those workloads; how do I simplify? And if you go back five, seven, eight years ago, media transitions enabled simplification in the backup space, from tape to disk, a lot of it tape to Data Domain. You look today, and a lot of that same transformation is happening with a media shift from disk to flash. So like you said, that low latency. One of the great things for customers, when they look at what flash brings to the table, is that, just like with disk in backup, I don't have to worry so much about unloading tape and micromanaging tape and all that.
The same thing here: instead of having to do a lot of work configuring disk systems and trying to balance your performance, wondering if I have the right configuration set up, I plug in flash and, you know what, it's fast. And if I need more fast, I plug in more flash and it just works. And so that's one of the big shifts I think we see: flash is not only enabling new workloads, but even on the traditional workloads, it lets you reduce that 70% maintenance investment to a much lower degree, so that you can free up your time and energy to go do the kind of stuff Mike talks about. But it's also enabling new workflows, particularly around data sharing, particularly as it affects your DevOps and your cloud operations. So I wonder if you could talk about that. You guys were one of the first to really harp on that: your ability to share copies, reduce the copy creep. What are you seeing from customers there? Yeah, so it's actually been an interesting thing. If you look at the industry, when deduplication first came out in backup, we said, hey, we're getting 20-to-one reduction on backup copies. But then you look around and you say, I still might have 12, 15, 18 copies lying around, whether it was disaster recovery or test and dev or data mining or backup or whatever it's going to be. Yeah, data, data everywhere, all those copies. And there were a lot of companies that tried to find ways of saying, all right, so there are these copies; could I use them for something else? Whether it was locally, hey, maybe I could use a snapshot or a clone and do smarter things with it, or on the backup side, I could use my backup copy and do more with it. And one of the tensions that customers always had was: but if I enable someone to start accessing that copy, that's going to take IO performance, that's going to take CPU away from the main job I'm trying to do, which is either serving the primary app or running the protection.
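As a back-of-the-envelope illustration of why that copy sprawl dominates even with good deduplication, here is a quick sketch; all numbers are invented for illustration, not EMC figures:

```python
# Hypothetical numbers for illustration only -- not measurements from EMC.
primary_tb = 100        # size of the primary dataset, in TB
dedup_ratio = 20        # the 20-to-one reduction on backup copies
backup_copies = 10      # retained backup generations (deduplicated)
full_copies = 15        # DR, test/dev, data-mining copies (not deduplicated)

# Deduplicated backup footprint vs. the undeduplicated copy sprawl.
backup_footprint = primary_tb * backup_copies / dedup_ratio
copy_footprint = primary_tb * full_copies

print(f"deduplicated backups: {backup_footprint:.0f} TB")   # 50 TB
print(f"undeduplicated copies: {copy_footprint:.0f} TB")    # 1500 TB
```

With these made-up numbers, the 12-to-18 loose copies cost thirty times what the deduplicated backups do, which is the tension described above: the savings from deduplicating the backup stream are swamped by the copies living outside it.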
And I've got to be really careful that it doesn't impinge on that primary role. With flash, again, one of the nice things here is, I'm not fighting for disk IOPS. So if I, on my XtremIO system, for example, wanted to create three or four snapshots and then use those for test and dev, I can do it, because I'm not fighting for the IO; I have enough IO that I can pull that off. We see the same thing happening in the backup space more and more now too: as the backups have gotten more efficient, with things like the ProtectPoint integration we announced and the Avamar integrations and whatnot, I have enough IO left over that I can again use that. And so I think that's another thing this new technology is opening up: more efficient copy reuse. So how about the customer journey? Because when we talk to customers, we also talk to your colleagues internally about the cloud, and the core architectures are changing. When customers hear, here's a new architecture, they kind of go, oh man, I've got to do all this due diligence. And what we're seeing is the confluence of interoperable architectures, for lack of a better description. I mean, that's my word, I just used it; it doesn't mean anything. You could have an architecture for a use case, right? Maybe it's a flash, all-flash array, for an XYZ workload, and then a more general-purpose architecture for something else. As someone grows, you can maybe implement an architecture, but when you start dealing with multiple architectures, they become problematic. Can you talk about this dynamic? Because there are some things changing around architectures: how do they commingle, how do they integrate, how can they seamlessly grow? There are probably two things that I'd bring up here. One is, a lot of the customers I sit with, they do have a large investment in something. Maybe it's us, maybe it's HP or whomever.
And they hear about all these great new deployments they can do, and they say the same thing they've said about any massive transformation: yes, if I were starting with a green field, if I could rip and replace everything and start over, I would do this, but I have a business to run, I can't just do that. So a lot of what we've been focusing on is, how do you evolve a customer from where they are today, which might be a very traditional, disk-based architecture with traditional backup and tape? How do I evolve you from where you are, with each step along the way giving you some ROI, where you're not ripping and replacing and hoping, but each step improves your world? So the old: hey, I'll put in a hybrid flash array, or I'll put in a Data Domain, and then maybe I'll get you to an all-flash array and to an integrated backup, all those sorts of pieces. So that's one: again, it's all about evolving, because you're not going to get from A to Z in one day. So you're saying you see multiple architectures co-existing? I do. And then I think the second thing, which you get to, is, if you look at the way things are being built on the internet, anything new today is about being loosely coupled. For so many years, the right answer to any technology problem was sort of, here's a big monolithic piece of code, take it all. And then you find people have two-, three-year upgrade cycles, installation cycles, so that by the time they're done, and they're just starting to realize the value, guess what, it's time to upgrade again. And what we really advocate more now is: look, you want things to be somewhat loosely coupled, an architecture of architectures, so you can say, this part of the environment's working okay, I don't want to modify it yet, but maybe I'll change the storage back end, or maybe for this workload I do want some sort of cloud connectivity, or actually maybe I'm happy with my storage, but I want to move some of the data movement pieces.
So loosely coupled from an integration standpoint, it's easier, but also more cohesive within the use case. Exactly, exactly. Because if you don't have that flexibility, if it's always rip and replace everything to drop a new piece in, I mean, that's just not the way the world works today. That's just good design, right? Decouple, have things loosely coupled, focus on integration and performance, and then have things really highly cohesive. Okay, so now bring that back. Harder than it sounds, though. Yeah, well, I'll just say, bring that back to the customer who's got, maybe, a VMAX, they've got a VNX, they've got Data Domain, they're thinking about XtremIO. So what is that architectural layer? Is it a VPLEX that brings that together, an abstraction layer? Talk about that. So I think there are actually a couple. One is the VMAX itself. I know in Guy's presentation we didn't get into it; I'd love to go deeper if you had a four-hour keynote. But a lot of what you see out of the VMAX today is it really becoming this data services platform, the VMAX 3 with the HyperMax architecture, where you can have a Data Domain hanging behind the VMAX for the integrated protection. You can have XtremIO behind the VMAX for the deduplicated, low-latency flash performance. You can have CloudArray to be able to do the tiering to the cloud, and along the way you get all the VMAX attributes, all the data services that VMAX customers have been comfortable with for years: SRDF, TimeFinder clones, quality of service, non-disruptive upgrades. So for a lot of those customers, the VMAX is in fact becoming sort of their storage hypervisor. Now, that's not everybody, right? Other customers look and say, you know, VPLEX is more the area where I want to pull that in, because maybe I've got more heterogeneous storage behind me, and they look more at, let's say, a VPLEX with the RecoverPoint integration, the MetroPoint technology, so I have protection done at a higher level.
And still other customers actually prefer to do it more at the VMware level, right? So the hypervisor's going to be my layer of virtualization. All right, so let's break this down to cost. It's cost of ownership, right? So there's a lot of competition nipping at your heels at EMC. They might come in saying, hey, we're a point solution, and then they get a little beachhead, a little cachet; it looks like a threat to EMC, but then at a certain scale point, it becomes an architectural challenge to integrate that in. You guys have a large installed base. What is the cost-of-ownership equation for the customer? I mean, coming in and buying XYZ flash drives or something else might make sense on paper, but what is the hidden-cost piece of it? Can you share insights around what's underneath the shark fin, or the iceberg, or whatever metaphor you want to use? Absolutely. I think with every customer we're meeting at this point, the conversation almost universally starts with OPEX as opposed to CAPEX right now. And it's because they look at their environment and they say exactly that: I am spending so much time and energy just maintaining my traditional environment that no matter what you give me, no matter how good this new little tweak will be, if it means I have to invest more in process and people to manage it, it's not worth it to me. And so a lot of our focus really is on, again, how do I simplify that down? So, dipping back to the VMAX discussion, the SLO-based provisioning that the VMAX does is a huge win for the customers, because they go from needing a lot of people to provision storage to it being a point-and-click kind of operation. Same thing with XtremIO: it's just fast, so I don't have the complexity of provisioning. The same thing with ProtectPoint: I don't have to set up these complex backup architectures, I can just put it in as a feature of the system.
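SLO-based provisioning, as described here, can be pictured as asking for capacity by service level and letting the platform resolve the media underneath. The sketch below uses invented tier names and a made-up `provision` function; it is not the actual VMAX or Unisphere API:

```python
# Invented service-level catalog for illustration; not EMC's actual tiers.
SERVICE_LEVELS = {
    "diamond": {"avg_latency_ms": 0.5, "media": "flash"},
    "gold":    {"avg_latency_ms": 3.0, "media": "hybrid"},
    "bronze":  {"avg_latency_ms": 10.0, "media": "disk"},
}

def provision(app_name: str, size_gb: int, slo: str) -> dict:
    """Request capacity by service level; the catalog picks the media."""
    if slo not in SERVICE_LEVELS:
        raise ValueError(f"unknown service level: {slo}")
    target = SERVICE_LEVELS[slo]
    # In a real system this would call the array's management API;
    # here we just return the resolved request.
    return {"app": app_name, "size_gb": size_gb, "slo": slo, **target}

req = provision("erp-db", 2048, "diamond")
print(req["media"])  # flash -- chosen by the SLO, not by the admin
```

The point of the design is in the last comment: the administrator states a service level and a size, and the placement decision (flash, hybrid, disk) is the platform's problem, which is the point-and-click simplification described above.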
So I think the cost is what everybody's looking at right now, because they're trying to compete with what they're hearing from the cloud. Is it integration cost or is it hidden costs, or both? I think integration cost is the biggest. One of the mottos I use inside CTD, and we had this in BRS and DPAD before it, is: integrate, integrate, integrate, integrate. Integrate the EMC CTD portfolio together to make it simpler to manage. Tie that together with VMware so that it's manageable and integrated with VMware. Tie that together for the end-user applications so that they get that view. And then finally, tie that to the external world, like the cloud, so that you can leverage it as a tool in your toolkit, as opposed to something you view as a competitive threat as an IT administrator. Well, from a customer perspective, I'm hearing: take the Steven Manley vision of data protection as a service and bring it to storage as a service. Reusable storage services that are componentized, that I can invoke as I need to; it's my cost profile, I can affect my TCO. That's the vision. It's stunning. It was actually stunning to me, as we dove in, how similar the two segments of the market really are, which is why I give credit to the EMC leadership team for figuring out what to put together in terms of CTD. These are really the technology problems those customers are facing: how do I streamline and get more responsive and more agile for these workloads? And the way I do that is, I bundle together the storage services, which is availability, disaster recovery, data protection, archival, performance. How do I make that just a simple click of a button, where regardless of what the tech is underneath, I say I need an application with this service level, I click the button, I get the application at that service level. Well, I think the organization is right. It took some time, and those organizational decisions are important as well. Organization drives architecture more than architecture drives organization.
Well, because you don't want to protect the future with the past, right? And so having it all in the same spot helps you accelerate innovation, not stifle it. We've got one minute left, we're getting the hook here, but I want to ask you the question on data protection; you brought that up, Dave. So obviously the RSA Conference just recently passed, and security is the number-one conversation. So take that data protection to the next level. We can talk about all the costs of integration, and it's a pain in the ass, we all know what it is. We all know threats are real, incidents are up, breaches are there. What is the security piece of it, and what's the implication in kind of making this storage stuff work? Because it's great to get the speeds and feeds of an XtremIO, which you guys are expecting to be massively successful, and it is. So that continues, but you've got the legacy stuff as well. Integration costs, you talked about. Security impact. How complicated does that make the equation? So I'll tell you, I mean, my answer to security has a lot of pieces, but the biggest thing I see in the market today, and we used this last year, is that it's all about the metadata. Because if I look at what the team at RSA is doing, they are pulling together, through a lot of their products, network-based metadata, user-based metadata: who's doing what, what's going over the network. If I look at the future of what's going to happen inside CTD and other parts of EMC, it's: I know the application. I can map the application to the server, I can map the user, so that when something happens, I can tell you exactly who did what, and when, and where, which really lets me do the forensics activity and the preventative activity to really understand what's going on. We've got customers all around the world now that are looking at things like: how do I leverage my, say, backup metadata to better manage my security?
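One hedged way to picture leveraging backup metadata for security is a simple anomaly check on per-day deletion counts pulled from a backup catalog. The function and data below are purely illustrative, not an EMC product feature:

```python
# Illustrative only: flag days whose deletion count is far above the
# recent baseline, the way backup metadata might hint at an attack.
from statistics import mean, stdev

def deletion_alerts(daily_deletes, threshold=3.0):
    """Return indices of days whose delete count exceeds the trailing
    7-day mean by more than `threshold` standard deviations."""
    alerts = []
    for i in range(7, len(daily_deletes)):   # need a baseline week first
        window = daily_deletes[i - 7:i]      # previous 7 days
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and daily_deletes[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# A quiet week, then a sudden burst of deletions on the last day.
history = [4, 5, 6, 5, 4, 6, 5, 5, 4, 200]
print(deletion_alerts(history))  # -> [9]
```

A real deployment would obviously need richer signals (per-dataset baselines, modification rates, data-location checks for leakage), but the shape is the same: the backup system already sees every change, so unusual change rates fall out of metadata it already has.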
So in the wake of some of the break-ins that have happened, can my backup tell me when I'm getting an unusual number of, say, deletions or modifications on a certain dataset? Because that can very well be indicative of an attack. And can my backup tell me if data's leaking into the wrong location? So you see EMC: that piece of RSA and all they know, and what we know about the data. If you can tie those together, I really have both sides of the coin. It also brings up the challenge around these startups getting beachheads, because most of the breaches come in through the back door; the HVAC vendor is a great example. But it could come in from subsystems that don't integrate well, with some loose holes in the code. Yeah, it's always the layers in between. It's the glue layers where you find that little gap to drive through. A little exploit. Steven Manley, CTO of the Core Technologies Division at EMC, the new group you're heading up, congratulations. Great to have you on theCUBE. Thanks for sharing that great knowledge. Again, getting down and dirty under the hood here inside theCUBE. We'll be right back with more signal from the noise after this short break.