The Cube at EMC World 2014 is brought to you by EMC: redefine. VCE, innovating the world's first converged infrastructure solution for private cloud computing. Brocade: say goodbye to the status quo and hello to Brocade.

Okay, we're back here live in Las Vegas at EMC World 2014. This is the Cube, our flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host Dave Vellante, co-founder of Wikibon.org. Our next guest is Amitabh Srivastava, president of EMC's Advanced Software Division. Welcome back to the Cube, alumni. Did I get that right?

Yes, absolutely, thank you very much. I was practicing the whole time.

So I've got to say, last year we had a very memorable conversation here in the Cube where we were totally geeking out on software-led infrastructure, the software-defined enterprise. Essentially, the stuff we talked about was in the keynote, pretty much Joe Tucci's messaging.

Yes, absolutely.

That means you're in the right spot within EMC. What's next year's keynote going to be? Let's talk about that, okay? Fast forward: we're going to talk about software. All-flash arrays are inline, you've got a million-dollar guarantee, ViPR's hitting the scene, OpenStack's encroaching, Cloud Foundry's out there. The software layer moving up the stack seems to be the battleground. What's your take on that?

So I agree that a lot more intelligence is moving to the software layer. At the lower level, I think the world is still going to get more and more complex, and the one who makes it simple wins. That's the element we are going after. Like we talked about last time, our view was that the storage infrastructure is going to consist of hardware arrays, meaning EMC arrays, non-EMC arrays, and commodity arrays.
Now, innovation is still going to happen and new elements are going to come in with flash and SSDs, and you saw the new acquisition we have done. A lot of these different innovations are going to happen, which essentially means you can come up with more and more powerful hardware. But the software now gets to utilize the power the hardware has got and make it more and more applicable for the various workloads that are there.

So I've got to ask you about one of the things we've been observing this past year, one of the trends Dave and I see coming. We kind of had the epiphany at Apple's 30th anniversary of the Mac in Cupertino. I attended; a lot of my friends were on the team, and it was really a geek celebration of the old tinkerers, the old hardware guys, the Homebrew Computer Club. We're seeing a shift in today's market where the maker market, the homebrew-like tinkerer, hardware is back: open compute, the build-your-own data center. And you mentioned things are getting complex. How does that maker-movement trend intersect with some of those low-level complexities, and how does software solve that? You're seeing Raspberry Pi, people hacking drones, some coolness around hardware geeks, and you don't need a friend who works at Intel to get stuff anymore.

Yes. So I think the key thing the industry has to make sure of is that the applications don't get affected by all the tinkering that is happening on the hardware side. Because if you look at the compute side, that's what happened with virtualization of compute, right? The abstraction layer needs to come in so that you separate the applications from the hardware layer. And that's why layers like ViPR start becoming very important, because they can separate the software layer from the hardware layer.
So while these tinkering things come out, and certain applications are going to run faster just because the hardware is more suitable for their workloads, applications don't fundamentally need to change. That's the only way the simplification is going to happen, from my perspective.

So somebody made the point, it might have been you, I don't know if it was an analyst meeting or a keynote, they all blend together, right? But somebody said enterprise applications really haven't changed much over the last several decades. In fact, I think it was Mark Hurd at Oracle OpenWorld: the average age of an enterprise app is 19 years, right? But the infrastructure is starting to change dramatically. You see the hyperscale stuff start entering the enterprise; that's your world, right? And we see a similar transformation of applications. You see applications popping up all over the place; you get zillions of applications on your phone and you can download them. Will enterprise apps start to look like consumer apps?

If you were at my keynote today, I was comparing two parts. There were companies which were really born in platform two but are moving to platform three. But if you look at the companies which are now getting born on platform three, they have no restrictions, and they are coming up with models we have never thought of before. I had an example of Andy Amill, a startup in Shoreditch in London using a completely different model. There's another startup we were talking about which doesn't use any storage; it only uses compute. So the traditional architectures we were talking about are now going to go away. People are utilizing that fact and saying, okay, how would I write my app if I was guaranteed that hyperscale compute, hyperscale storage, hyperscale everything is available to me? How would I construct and rewrite this app? How would I design this app?
And I think that's the phenomenon that is going to happen. But these are going to be those applications which were born on platform three, and I think the startups are going to be the first place you see all of these things happening. Right now the enterprises are first moving to platform three; once that problem is solved, I think you'll start seeing the fundamental changes in these applications.

Which is virtually unlimited resources.

Absolutely.

And I would just put out there: no spinning-disk latency. Even though your forecast and IDC's forecast say a very small portion of the market will be flash, I think that's where a lot of the value is going to be.

I think with all of these things it's going to be tiered. Depending upon the applications you're in, the cost of flash even if it comes down, versus the cheaper disk, you can get very far. So depending upon how cost sensitive you are or how performance sensitive you are, there's a place for all of these different things.

Horses for courses.

Yeah, exactly, exactly.

So I've got to ask you about some of the trends around OpenStack and the cloud and all these things coming down the pike. How do you see that affecting some of the storage architectures? Is it going to be abstracted away? What's your take on that?

So if you look at it, everyone is working on the storage architectures, even OpenStack. What is it doing? At the simplest level, what it's trying to do is ask: is there a uniform way to talk to all the arrays? Today, every array talks differently. Somebody talks SMI-S, somebody talks CLIs, somebody talks REST APIs. So if you look at the Cinder plugins in OpenStack, what they're saying is, okay, if everybody builds a Cinder plugin, then people can use the Cinder APIs to talk to all the arrays.
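To make that uniform-API point concrete, here is a minimal sketch of the idea: each vendor hides its own management protocol (SMI-S, CLI, REST) behind one common driver interface, so callers never care which array is underneath. The class and method names here are illustrative, not OpenStack's actual Cinder driver contract.

```python
from abc import ABC, abstractmethod

class ArrayDriver(ABC):
    """Uniform interface; each vendor hides its own protocol behind it."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

class RestArrayDriver(ArrayDriver):
    """Array managed over a REST API (hypothetical endpoint)."""
    def create_volume(self, name, size_gb):
        # A real driver would POST to the array's management endpoint here.
        return f"rest://{name}/{size_gb}"

class CliArrayDriver(ArrayDriver):
    """Array managed by shelling out to a vendor CLI (simulated)."""
    def create_volume(self, name, size_gb):
        # A real driver would invoke something like `vendorcli mkvol` here.
        return f"cli://{name}/{size_gb}"

def provision(driver: ArrayDriver, name: str, size_gb: int) -> str:
    # Callers talk to one API regardless of the array underneath.
    return driver.create_volume(name, size_gb)

print(provision(RestArrayDriver(), "vol1", 100))  # rest://vol1/100
print(provision(CliArrayDriver(), "vol1", 100))   # cli://vol1/100
```

The value of the plugin model is exactly this substitutability: adding a new array means writing one more driver, not changing any caller.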
So everyone is going more and more in the direction of standardizing, of talking to these different arrays with different characteristics in a uniform way. ViPR takes it a step further than that. It says, no, I don't only need a uniform way to talk to them; how about if I virtualize it? How about if I pool all of those things, and then you can carve them into virtual resources? And I think that's the layer I was talking about: as this layer gets more abstracted and more powerful, that's where the separation between the applications and the hardware is.

People ask an obvious question there: does that layer cause overhead? Does it add inefficiencies? What do you respond?

That's a very valid question, and in my opinion it was the reason why storage was the last to get virtualized. That's why the way we approach it is by splitting the control plane and the data plane. You can get all the virtualization, all the abstraction, all the management, and a lot of the things we're talking about just by being in the control plane, without affecting the data plane. So you do not impact at all the performance of any of these arrays, or the key value that they provide. Because if you paid a lot of money buying a VMAX and you slowed it down, that product's not going to sell. So ViPR does the majority of the stuff on the control plane side; it's very opportunistic on the data plane side. It's in the data path for object and HDFS, but for block, it stays away.

When I've talked to a lot of EMC customers over the years, the number one complaint is always, oh, I've got so many platforms. That really is the one big complaint you hear, and ViPR is a way to consolidate that and normalize it.
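The control-plane/data-plane split he describes can be sketched in a few lines. This is a toy model, not ViPR's architecture: a controller decides where volumes live, but once a volume is placed, reads and writes go straight to the array, so the controller adds no I/O overhead. The placement policy and class names are assumptions for illustration.

```python
class Array:
    """Stand-in for a physical array; I/O goes straight to it."""
    def __init__(self, name, free_gb):
        self.name, self.free_gb = name, free_gb
        self.volumes = {}
    def write(self, vol, data):   # data path: no controller involved
        self.volumes[vol] = data
    def read(self, vol):
        return self.volumes[vol]

class ControlPlane:
    """Places volumes across a pool of arrays; never touches the I/O path."""
    def __init__(self, arrays):
        self.arrays = arrays
    def create_volume(self, vol, size_gb):
        # Toy placement policy: pick the array with the most free capacity.
        target = max(self.arrays, key=lambda a: a.free_gb)
        target.free_gb -= size_gb
        target.volumes[vol] = None
        return target             # callers then do I/O directly on this

pool = [Array("vmax-1", 500), Array("commodity-1", 2000)]
ctl = ControlPlane(pool)
arr = ctl.create_volume("db-vol", 100)
arr.write("db-vol", b"payload")   # block I/O bypasses the controller
print(arr.name)                   # commodity-1 (had the most free space)
```

The point of the split is visible in the last two calls: provisioning goes through the controller, but the write does not, which is why the abstraction can exist without slowing the array down.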
And I can see, and we're going to have Brian Gallagher on tomorrow, but I can see Symmetrix, or VMAX, living in that environment for a long, long time. It's going to be a long time before your stack matches that functionality, maybe not in our lifetime.

I don't know, maybe, maybe.

But the rest of the portfolio, maybe with some exceptions, it looks like ViPR, the stack you're building from scratch, you told us last year, has a pretty big TAM to go after.

Yeah, I think so. If you look at the cloud market, the hyperscale market, EMC didn't play much in that market. This is the market the public clouds were playing in, mostly built with commodity hardware, and the startups were playing in it. That was a big market passing us by. Our focus, with ECS and now with ViPR on commodity, right? That's the market we are really trying to go to. And that I believe is a growth market for EMC, because it expands EMC's reach. EMC was very strong in the traditional market, and I don't think that's going to change; with ViPR and things like that, we can help that process. But this new market, which is emerging and growing at a much, much faster rate, now we believe we have some very crucial technologies, with ECS and now ViPR on commodity, that let us play in it.

You mentioned ECS. How's your product manager doing there? David Goulden tells us he's the product manager for ECS.

He's like, good.

Was he the product manager?

He was the product manager for ECS, and we kept on...

Did he write PRDs? I want to know. We're going to call you out on that. We're going to come see you at the Circle Bar later and say, you never wrote a PRD.

And we had to say, David, now we're going to take your product management away, because I have to actually ship this product. Thanks for the budget.
See you later. And so he said, well, at least I have to meet the product manager who's going to take my job away, and that product manager was always watched very carefully. I warned him in advance; he was a kid right out of school.

I want to ask you about a trend we're seeing. In the early days of the hyperscale guys, the assumption was commodity hardware, with the software-defined data center layered on top of it. Now when you talk to the hyperscale guys, they say, well, actually we kind of did a 180. We're doing highly customized hardware. We're doing compute that is super dense, that you can't buy on the open market. The networking pieces are getting more and more customized. What do you make of that trend?

I think there's going to be a combination. For example, if you look at the ECS we're going to ship today, it's totally built with commodity hardware, right? But it's really designed for the cloud storage workloads. If you look at the appliance, we've got two network switches, two servers, all these disks, and we were optimizing for cost, right? So you make it the densest possible thing to do that. Now, if you asked me to build this appliance for a very different workload, I may have designed it very differently; I may have put all-flash arrays in there. So depending upon which workloads you're talking about, the hardware is going to evolve. That's why I'm much more a believer that it's not all commodity or all specialized. Everybody figures out a sweet spot, and that's why I was making the comment that the world is going to get more complex, right? And you have to use the software as a mechanism for making it simpler.
So I guess I didn't formulate my question well, and I think you just answered it: you don't see the lack of specialized hardware in the enterprise as a problem, because essentially the hardware market hates a vacuum, and they'll always fill it.

That's right, they will fill it, and they will come back. Take DSSD, right? It's such a critical technology. There will be certain workloads where it'll just completely blow everything away. So if you were designing a specialized array for certain workloads, you could configure it in a very, very different way. But people will also like to just buy commodity hardware on price, put the software on it, and go build a data center. That's okay too. On the other hand, if you're really running HANA or SAP or Oracle and you really need a very specialized thing, maybe for a mission-critical application it's actually okay to spend a few more dollars and buy more expensive hardware. So I think it will all balance out. The key thing, where this will fail, is if the complexity we are introducing becomes visible to the operations or management of those things. The software has to automate that whole process. If that is achievable, then you can have as much complexity downstairs as you want. It doesn't matter; applications still go.

I want to ask a last question; we're going to break here and start to wind down our day. But I want to ask you more of a computer science question, as the world grows and changes. Society is impacted: wearable computers at the top of the stack, data centers through storage, cold storage, warm storage, hot storage, whatever you want to call it. Computer science is changing. What trends, from a computer science standpoint, have your attention?
The kind of candidates you guys are looking to hire, because with ViPR you're seeing geo-distribution, protection across multi-site, all that stuff that is now going to be very much one global data center from an object standpoint, and block and the other things. This is starting to get into multiple sciences intersecting, and that's all the rage right now in computer science programs. We'll leave it at this: what trends in computer science do you like right now, that you're watching?

So the biggest one, I believe, and of course my focus is much more on the software side, is distributed computing. Distributed computing is hard. When the majority of us grew up, nobody told us how to write distributed programs. You write stateless programs; how do you do distributed debugging? When a data center goes down, you would not imagine how poor the tools are for distributed debugging. Logs and all of that stuff don't really work out. So the advancements in this whole thing: compute is not just going to run on a single processor or one machine, it's going to be distributed over thousands of nodes, multiple data centers. How do you design applications like that? We were just talking about applications designed 19 years ago. Now if you come back and say, I've got infinite compute distributed all over the world, how would you architect this application? You have to design how the application will be managed, how it will be debugged, all of those aspects. And I think the fundamentals of computer science have to give the graduates that are coming in very strong foundations in all aspects of distributed computing.

And you've got real time too; that adds a big dimension to things.

Oh, yes.
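His distributed-debugging complaint is usually attacked with request correlation: every node tags its log lines with a shared request ID so one request's path can be reconstructed across machines. A minimal sketch, with the node names and log format invented for illustration, in-process rather than across real machines:

```python
import io
import logging
import uuid

# Collect logs from several "nodes" into one stream so a request can be
# traced end to end; a real system would ship each node's logs to a
# central store instead of a shared in-memory buffer.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
log = logging.getLogger("cluster")
log.setLevel(logging.INFO)
log.addHandler(handler)
log.propagate = False

def handle(node: str, request_id: str, step: str):
    # Every node tags its log lines with the same request ID.
    log.info("req=%s node=%s step=%s", request_id, node, step)

req = uuid.uuid4().hex
handle("frontend-1", req, "accepted")
handle("storage-3", req, "wrote-replica")
handle("storage-7", req, "acked")

# Distributed "debugging": grep one ID to reconstruct the request's path.
trace = [line for line in stream.getvalue().splitlines() if req in line]
print(len(trace))  # 3
```

This is the foundation modern tracing systems build on; the hard parts he alludes to, clock skew, partial failure, and log collection at scale, are exactly what make the real versions difficult.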
And you've got the geography issue, and virtualization. It's actually a good time; it's intoxicating at some level, at a computer science level. A lot of opportunity. A development renaissance is coming.

Oh, absolutely. I think right now the industry, in my opinion, is at an inflection point. Before, there was an argument about whether the clouds would take hold or not, or who would get there, or whether it was just the public clouds. But now everybody's accepting that fact. Hyperscale, massive scale, is not limited to these big public clouds or these big service providers; every enterprise is going to have it. Who's going to write applications for them? Who's going to maintain them? These fundamental things are going to be spread all over the industry.

Dave and I were just talking to people who follow our CrowdChat beta. It looks like a little chat client, but it's actually really distributed, really real time, through the firewalls, using Node.js with all this kind of real-time DevOps stack. It's difficult; it's not trivial.

That's right.

It's not just load up Rails on a MongoDB and I'm done.

This is where, of course, I'm out of my depth; I've not really kept track of exactly what the curricula at different universities are. But that's where, in my opinion, the focus has to change: adapt to where the industry is going, to how the next generation of computing is going to be.

As Jeremy Burton says, don't go against fashion.

That's right. And the fashion right now is big data, cloud, large-scale distributed systems.

And the management piece is very much a big part of it.

Yes.

Thank you so much for coming on the Cube. Always great to talk to you. I think that's going to be the keynote next year: large scale. Joe Tucci, we'll get a quick write-up for you. Okay, we'll be right back. Thanks for coming on the Cube.

Thank you very much.
Great conversation; we're always talking computer science here inside the Cube. We'll be right back after this short break.