Welcome back to theCUBE, this is EMC World, exclusive coverage from SiliconANGLE and Wikibon. This is our flagship program theCUBE, where we go out to the events and extract the signal from the noise. We talk to the smartest folks we can find, entrepreneurs, executives, whoever's out there that has that signal; we want to extract it from them and share that data with you. I'm John Furrier, the founder of SiliconANGLE, and I'm joined by my co-host. Hi everybody, I'm Dave Vellante of wikibon.org. John Roese is here, he's Senior Vice President and CTO of EMC. John is six months on the job, they threw you right in, welcome to theCUBE. Yeah, great to be here. Yeah, so we met in Hopkinton a couple months ago. We talked about a lot of things, but you had to be a little bit vague, and we now see them come to fruition here. You're joining EMC at a very exciting time, and you come with quite a background. Talk about that a little bit; you come from Huawei and you're not a storage guy. Yeah. You're at a storage company, how's that all work? Yeah, in full disclosure, while I have built storage systems, historically I've lived my life in the carrier ecosystem, the network infrastructure ecosystem, the silicon ecosystem, the real-time ecosystem. And for context, I was the CTO at Cabletron and Enterasys, Broadcom, Nortel, and then ran Advanced Technology at Huawei, which pretty much covered everything. So interestingly enough, when I got a call from EMC saying, hey, we're looking for a new CTO, we're kind of rethinking the role, my first question was, okay, you understand I'm not a storage guy, even though I understand your space. And the comment back to me was, well, EMC is not entirely a storage company going forward. We're actually many, many more things, and we have to be able to navigate a much more complex ecosystem. 
And so after the back and forth, when we came to a conclusion that it seemed like a pretty interesting environment, my conclusion was the amount of resources available at EMC and the technology they've accumulated over the last, let's say, decade are incredibly relevant to the next decade of the industry. So as a technologist wanting to build things, there are a lot of tools here to work with. So talk about the big megatrends. We always talk about convergence here, and industries converge. They have, ever since I've been in the industry. What are the big convergence trends you're seeing, in particular around consumer, the enterprise, and the pieces of your background? Yeah, I mean convergence and divergence happen all the time, but the biggest of the big convergences happening right now is actually at the ecosystem level and the industry level. And I've been talking about this for many years in different industries, but now it's really accelerating. And that is that we used to think about consumer, enterprise, and carrier as three very, very different domains, where you had different vendors, different solutions, different technologies. And now, more and more, when we look around, what we're realizing is these things are bumping into each other everywhere. If we think about the hybrid cloud, it is impossible to build a hybrid cloud purely based on enterprise technology, because half of the hybrid cloud is living in the service provider world. And the intersection between them is really where the action's happening, and quite frankly, to navigate it you have to think about both sides. The biggest challenge in enterprise IT that's really keeping CIOs up at night is a macro trend called the consumerization of IT. 
This idea that their end users are gravitating towards using consumer technology for enterprise purposes. At the same time they're realizing, well, if I take advantage of that, and I use mobile devices that come out of Apple and Samsung, if I use analytic tools that maybe originated out of the Googles and Facebooks of the world, I can actually run my enterprise better. Now, while that sounds very good, and it's a huge opportunity to take advantage of that convergence, it's somewhat unbalancing for the people involved in it, because you have to learn new technologies, you have to think about new ecosystems, and as an IT professional, you actually have to think about incorporating systems into your IT strategy that you can't own, you can't deploy on-premise, you can't control the vendor ecosystem, and your customers and end users are absolutely going to demand that they're going to use them, whether you like it or not. So, it's a little bit interesting. So, I mean, obviously you're describing the transformation chasm, the problem, the reconstruction, the investment. So, it's classic legacy. Do you tear down and rebuild? Do you try to retrofit some things? I mean, these are all hand-waving, whiteboard-like conversations, but the reality is that there's some serious work involved. So, walk us through that. What are the key challenges that you see? I mean, everyone loves the modularization. Hey, decouple this, decouple the data layer, decouple the control layer; this is good messaging. At the end of the day, how does that get done? Yeah, well, first of all, I lived most of my career in the enterprise environment. At one point, I was the CIO of a half-a-billion-dollar company, so I had to live on that side of the house. Enterprises are evolutionary, not revolutionary. I might offend some people by saying that, but that's the reality. It's not a greenfield. 
You know, I remember a CIO telling me one time that when you adopt new technology as a CIO, there are only two possible outcomes for you personally. One is it's wildly successful and everybody forgets who did it, and you don't really get a lot of credit for it. And the other is that it fails and you're the former CIO. And so there's a conservative nature to enterprise IT, by design. That being said, there is no shortage of understanding that these new technologies and tools are inevitabilities. They are useful and we need to go navigate them. So the term I like to use in terms of managing this transition, and it maybe sounds like an oxymoron, is disruptive evolution. It's this idea that you really have to evolve forward, which means you pull your existing technology into the new world and make it compatible with the old world, and you actually have a hybrid system. But you also have an open mind to the fact that the new technologies may inevitably disrupt some of your thinking and operating model. So a great example would be when you mobilize your enterprise. From a storage perspective, probably the first thought in most enterprises is, where am I going to store all the information? Well, I'll put it in my existing storage systems. That's great, you could do that, except most of those mobile applications require S3 or object store interfaces or RESTful APIs. So you could throw out your entire enterprise IT infrastructure and just move to an S3 environment, not really likely to happen because there's a huge dependency on it, or you could do a disruptive evolution, in which you bring in a new technology that is absolutely going to take over certain workloads that you used to run in this other environment, and move them to this new part of the infrastructure, but put them in an overall coherent framework where they actually are just new pieces of a comprehensive infrastructure that you support your enterprise on. 
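The S3-style object interface John says mobile applications expect is worth a quick sketch: flat buckets of keyed blobs accessed through simple verbs rather than file paths or block LUNs. The class and method names below are illustrative stand-ins, not any vendor's actual API.

```python
# Minimal in-memory stand-in for an S3-compatible object store,
# to illustrate the access model mobile apps assume: buckets,
# keys, and put/get/list verbs. Illustrative names only.

class ObjectStore:
    """Toy object store with S3-like semantics."""

    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, data: bytes):
        # Objects are immutable blobs addressed by (bucket, key).
        self._buckets[bucket][key] = data

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

    def list_objects(self, bucket, prefix=""):
        # Prefix listing is how "directories" are emulated.
        return sorted(k for k in self._buckets[bucket] if k.startswith(prefix))


store = ObjectStore()
store.create_bucket("mobile-app")
store.put_object("mobile-app", "photos/2013/img001.jpg", b"\xff\xd8")
print(store.list_objects("mobile-app", prefix="photos/"))
```

The point of the sketch is the contrast: nothing here looks like a mounted filesystem or a LUN, which is why bolting mobile workloads onto a traditional array is not a drop-in exercise.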
And the advantage of doing that is you've disrupted yourself by adding a whole new class of storage and information assets, but now you've put it into a framework which says, going forward, for the next mobile application challenge you have, your infrastructure is now expansive enough to accommodate it in the kind of evolutionary model that you've been historically dealing with as you add new workloads and new applications. That's a good point, no one's going to do a rapid rip and replace, it's just kind of impossible. Absolutely. But I want to build on that and just kind of get a philosophy mindset from you as you look at the new horizon of the modern era, call it the modern era, if you want to use a baseball term, the post-steroids era, whatever you want to call it. So we asked Pat Gelsinger two years ago on theCUBE, we said, Pat, what comes first, the infrastructure or the apps? What drives the innovation? Because it used to be, traditionally, that infrastructure enabled stuff on top to happen, but yet with the consumerization, you've seen the pressure come from the other side. So openness and choice in development, all the stuff they're talking about here. He said infrastructure, but now we're hearing it's happening the other way. Okay, cool. So I want to get your perspective on kind of that philosophy, but also, more importantly, the issue of incrementalism, right? So disruptive innovation, as you mentioned, is totally cool. But what comes first? Tooling or the platform? Because in this rapid era of deployment, shadow IT, little incremental tests and improvements, what comes first? Do I fix the tooling first, or do I go after the platform? Okay, well, two questions there. Let's take the first one first. You know, does the application come first or does the infrastructure come first? 
The answer is, typically, most enterprises get catalyzed to do something new in their infrastructure because of a new application or a new use case, generally one they didn't anticipate. I've rarely seen an enterprise build out infrastructure in advance of some demand for it, because we just simply don't have the IT budgets to do that. So more and more now, what we're seeing is a massive set of new applications, especially around mobilization, consumerization, big data, and those are driving thinking about the evolution of the infrastructure. Now, a well-thought-out infrastructure, however, says that application I see is not a distinct, independent entity. It is an indicator of a trend. The first mobile application that your marketing people ask for should not be thought of as the only mobile application. It should be an early indicator that there is a new class of service that your infrastructure has to provide. And so if you manage it properly and don't build for the one-off application, but you do things like expand your storage architecture to include block, file, and object, for instance, in a coherent way, then what you've done is built a platform so that the next mobile application isn't a radical evolution or disruption. It's just a new workload on an infrastructure that's now prepared for it. But I kind of disagree with Pat in the sense that you don't build infrastructure proactively without knowing where it's going to be used. You typically build it because there's a new condition that has to be solved for, but if you build it correctly, then that trend gets handled by an infrastructure that's expanding. What you're saying is that, if I can just take liberty here, it's easier to build a platform today than it was in the past, in the sense of getting something going that's extensible. There's less risk. Is that what you're trying to get at? Well, actually I'm not. That's kind of your second question. It is not easy to build platforms today. 
In fact, platforms are actually highly fragmented and somewhat incoherent, actually. And so what you've heard from EMC this week is that while we're not in the end-to-end, vertically integrated platform business, we like an open architecture, what you're starting to see is more and more technology from EMC materialize that is less a vertical-specific function, like block storage or file storage or object storage or backup and recovery or sync and share. You're now starting to see horizontals that are layers of an enterprise platform, ViPR being a great example. Now why did we do ViPR? Not because we wanted another product, but because we realized that you now have a very complex set of services that are actually going to serve an even more complex set of use cases as they evolve. And we needed to put in something that could actually abstract the complexity between those two domains. You know, think of it as kind of an hourglass. And decouple them at the same time. Decouple them, abstract them, and make sure that your applications don't have to understand the intimate details of the evolution of your infrastructure, and your infrastructure doesn't have to be built or rebuilt every time an application evolves or changes. And so putting that software-defined storage layer between the two suddenly changes the operational characteristics. I hate to describe it as a bit of a mask, but quite frankly, your applications don't want to know what kind of infrastructure they're running on. They just want a set of services. And your infrastructure, it really is suboptimal if it's designed to only serve a specific application, because the number of workloads and applications in enterprises, and here's a prediction for you, is going to get bigger as we go forward, not smaller. Okay, so let's unpack that a little bit and talk about the future of ViPR and this whole notion of software-defined storage. So today, EMC's a company, got a lot of controllers. 
If there's an API out there, you got a controller for it. And so now the rationalization of that is, hey, look, different workloads require different purpose-built solutions, and you've done a great job, high-end, low-end, block, file, et cetera. And David Goulden often makes the point that one size doesn't fit all. ViPR, in a way, as a platform, which is really hard to build, isn't it kind of a one-size-fits-all? You're taking all that complexity and turning it into an API. Well, if ViPR was the only product that we deployed, that would be correct. And it would be quite hard to build such a product. That's the idea of, I remember IBM used to have this advertising where they had this Swiss Army knife-like thing that solved everything. And they actually said, you can't really build that. I kind of agree with them. So ViPR's not trying to be all things for all people. What it's trying to do is to say, we will have a heterogeneous set of capabilities in the infrastructure. There are differences between a block storage architecture, a file storage architecture, an object store. There are differences even when you go to an all-flash array. If you've looked at our XtremIO acquisition, it's a software play, but it's an entirely different way to think about random access information stores. And not to get too detailed on it, but that is very different than a VMAX. It's very different than a VNX. It's very optimized for that new class of capability in the infrastructure. That's all wonderful, but the reality is, as that horizontal expansion occurs, we can't inflict that on the higher-layer services, the higher-layer applications, and even the VMware environment shouldn't really know that that stuff's going on. So the ViPR layer is not trying to replace all of that technology. It's trying to go directly after the complexity that comes as you have diversity in your infrastructure. Put a normalization layer in place. 
And what's fun about it is, not only is it normalization in terms of presenting a common set of APIs and interfaces northbound, it also makes our life easier on the storage side, because our storage products unfortunately had to deal with n number of northbound experiences. And that created overhead in management complexity, APIs, I mean, every protocol you can imagine is implemented on these things. Well, now, if they can talk to ViPR around dealing with the next generation of object stores, and they present their services through ViPR, they only really need one API between the storage array and that normalization layer. Northbound out of ViPR, there may be multiple interfaces. For instance, on our first release of object, it's HDFS, S3, and Atmos. Imagine if we had to put all three of those in every product that we built independently. It would just be overwhelming from a complexity perspective on the R&D side. The trick is, do what you need to do with the normalization layer that makes the system work better. It sounds like an operating system. You know, decoupled and highly cohesive elements working together. So okay, so let's build on kind of your vision for the future. So you're at EMC, you're new to the job, so you have limited data, you have fresh eyes right now in the company, you're probably pinching yourself, like, I'm not a storage guy, when they interviewed you. You did some due diligence, right? Tucci laid out today $5 billion a year spent on innovation, organic and M&A. What's the 20-mile stare for you, John, as you look out on the horizon? Things to work on, things to attack? Sure, sure. Well, I mean, that's a very big question. I mean, if you go 20 miles out, it's a little bit more complex. You have to go five. So again, full disclosure, this is my opinion. 
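The "hourglass" normalization idea John describes — multiple northbound protocol front ends (HDFS, S3, Atmos) mapped onto a single southbound API that every array implements once — can be sketched as a simple adapter pattern. The class names and interfaces below are hypothetical illustrations, not ViPR's actual code or API.

```python
# Sketch of a normalization layer: two northbound protocol
# front ends share one narrow southbound driver interface,
# so each storage array only implements a single API.
# All names here are invented for illustration.

class ArrayDriver:
    """The one southbound API an array presents to the layer."""
    def write(self, namespace: str, name: str, data: bytes): ...
    def read(self, namespace: str, name: str) -> bytes: ...


class InMemoryArray(ArrayDriver):
    """Toy backend standing in for a real storage array."""
    def __init__(self):
        self._data = {}
    def write(self, namespace, name, data):
        self._data[(namespace, name)] = data
    def read(self, namespace, name):
        return self._data[(namespace, name)]


class S3Frontend:
    """Northbound S3-style verbs translated to the driver API."""
    def __init__(self, driver: ArrayDriver):
        self._d = driver
    def put_object(self, bucket, key, body):
        self._d.write(bucket, key, body)
    def get_object(self, bucket, key):
        return self._d.read(bucket, key)


class HDFSFrontend:
    """Northbound HDFS-style paths translated to the same driver."""
    def __init__(self, driver: ArrayDriver):
        self._d = driver
    def _split(self, path):
        ns, _, name = path.lstrip("/").partition("/")
        return ns, name
    def create(self, path, data):
        self._d.write(*self._split(path), data)
    def open(self, path):
        return self._d.read(*self._split(path))


array = InMemoryArray()
s3, hdfs = S3Frontend(array), HDFSFrontend(array)
s3.put_object("logs", "day1.txt", b"hello")
print(hdfs.open("/logs/day1.txt"))  # same object, seen through another protocol
```

The design point the sketch captures is exactly the R&D economics John mentions: adding a new northbound protocol means writing one front end against the narrow waist, not reimplementing it in every array.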
This is not necessarily reflective of all of the activity that we are doing, but one of the reasons I came here was, I thought I had a reasonable opinion about where the world could go, and I thought EMC had the tools that could actually get us there. I actually believe that while we've done a fantastic job, and continue to do a fantastic job, of arming the enterprise to basically navigate the transformation of the enterprise, the consumerization, the hybrid cloud, all the things that we talk about, what's very interesting is if you go to the other domains, if you walk over to the service provider world and you take a look at what they're doing, their challenges are actually very similar to what we're navigating in the enterprise world. In fact, if you listen to our earnings call for the last quarter, when Joe was talking about our performance, the largest, or the fastest-growing, segment of our revenue was actually in the service provider space. It grew 40%. Yeah, it was massive growth. Why? Because they're actually a lagging indicator of the kind of evolutions that we've been driving in the enterprise. Most service provider infrastructure, especially on the telco side, is not even virtualized yet. There's a whole thing called network function virtualization, where they're trying to rethink moving from a hardware composition to a software-hardware decoupling on the network side. When we think about hybrid clouds, we all think that's a reality today, but there is no seamlessness between the public and private side, really. And so we have huge opportunities to extend our technology into the other half and also deal with interworking between them. When we jump over to the consumer side, candidly, business-to-consumer interaction is going to drive a huge amount of opportunity for us around big data analytics, but also around security. RSA, the vast majority of the things they secure are B2C, not B2B, interestingly enough. 
Silver Tail, that acquisition, is about securing web and e-commerce transactions. It's not necessarily about the stuff that happens inside of the enterprise; it's the stuff that happens outside. So I actually, to answer the question very candidly, I think as we go forward, the most interesting thing about EMC's future is the application of the technology that we're pioneering in the software-defined data center and the next generation of the virtualized enterprise in these other domains as they're evolving, because they're evolving largely at a slower pace, or a different pace, than the enterprise has, because the enterprise, quite frankly, has rapidly accelerated into the virtualized world, where the other worlds have not necessarily done it to the same degree. Will our technology map naturally? Probably not, but do we have the core competence and the core presence and the understanding, the intellectual capability and the R&D budget to go after it? Absolutely. And the result of that is... And you have a good view, you have a good view. Looking at your background, you've got the new understandings going on at the network layer, you understand the carriers, you kind of understand the big picture across the multiple industries. So I've got to ask you about software-defined networking, or network virtualization, OpenFlow, whatever it's being called, however it's evolved from a semantic standpoint; that's clearly changed the mindset of folks. The software-defined data center is now more of a marketing term, but it's a destination. So how do you look at that journey of software, right? Because now you've got in-memory, you've got persistent memory, that's changing the address space of how software will develop. What is the future of software in this new world? Well, I mean, software is clearly the building block. I mean, there are hardware building blocks, but the vast majority of innovation is going to happen at the software layers regardless. 
But let me kind of re-vector your question a little bit. When you start thinking about this new world as we evolve forward, let's just fast-forward out to a world in which non-volatile memory is large scale. Compute is almost infinite, 100-core CPUs. The amount of data that's being stored is measured in exabytes or zettabytes in an enterprise, for instance. How do we navigate that? What does the world look like? And one of the things that's very clear to me is that there's actually a new composition of the overall IT infrastructure starting to form. We're clearly going to have what we describe as a persistence layer, where we scale out and store information for very long periods of time at the exascale or beyond. That's kind of the evolution of the classic storage industry, and it will be multi-technology, multi-interface, and it will be geo-resilient, multi-vendor, everything. But there's another tier that's happening, which is, as we move storage and information closer to the compute layer, what we're realizing is we haven't even begun to define the data services and the functions to make that a coherent system. You know, slapping a flash card in or doing DAS on a server is not a system. It's a hard drive. But when you have thousands of them and they're scaling to terabytes or petabytes, we absolutely have to think about how we're going to tie them together and make them work coherently. Now you could argue that may or may not be EMC's core business today, but we're in the flash card business. We clearly know how to build data services, and what we're starting to see is that there is an unmet requirement of actually making that performance-oriented storage tier intelligent and coherent going forward. So, huge opportunities for us to actually navigate and participate as the world evolves into this kind of two-tier hierarchy. 
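The two-tier hierarchy John sketches — a near-compute performance tier and a geo-resilient scale-out persistence tier — can be caricatured as a placement policy. The thresholds, tier names, and function below are invented purely for illustration; real tiering engines use far richer heat and cost models.

```python
# Toy placement policy for a two-tier storage hierarchy:
# hot, small objects land in server-side flash near compute;
# everything else goes to the scale-out persistence layer.
# Thresholds and names are invented for illustration.

PERFORMANCE_TIER_MAX_BYTES = 1 << 30  # assume 1 GiB of server flash per node

def place(object_size: int, accesses_per_day: float) -> str:
    """Pick a tier from object size and access frequency."""
    if accesses_per_day >= 100 and object_size <= PERFORMANCE_TIER_MAX_BYTES:
        return "performance-tier"   # near-compute flash / in-memory
    return "persistence-tier"       # geo-resilient, exascale store

print(place(4096, 5000))   # hot 4 KiB block
print(place(10 << 40, 1))  # cold 10 TiB archive
```

The interesting engineering, as the passage notes, is not the policy itself but making thousands of these near-compute devices behave as one coherent system rather than a pile of independent hard drives.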
Well, and the other interesting thing to me, and I wonder if you could comment, is what happens to application design as you move to that world that you just described? Because today, you know, databases are relatively small. You make a few calls on them. They're very separate. You know, to get them together, it takes a long time. And business processes are wired around these databases and very inflexible. So the answer is really simple. I mean, if you go to massively distributed systems, there's a well-known principle, and that says if you build applications for massively distributed systems, if you try to build them to be coherent and synchronous, you will fail. And so what's happening with applications is we're starting to create loose coupling, asynchronous, asymmetry, eventual consistency, and so that changes everything, and it has consequences on where you store information and where you do the processing. Imagine a world where an application that's primarily living up in an in-memory database tied to a particular CPU in an asymmetric fashion is actually offloading functions that happen in maybe a scale-out tier of storage with compute that actually post-processes or is looking at something in parallel in an asynchronous way, and then ultimately, eventually, these things recombine and answer a complex question. Where the compute lives, where the application lives, it won't be one monolithic piece of code. It will be a distributed, relatively asymmetric software architecture living over a very distributed, relatively asymmetric set of infrastructure services. Now, that's not going to happen overnight, but we're clearly heading in that direction because trying to build a tightly-coupled, synchronous system at that scale ain't ever going to happen. So other than thinking about this stuff a lot, what do you do? Well, my job is interesting. I am the official herder of cats, is the best way to describe it. 
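The application pattern described here — a fast in-memory answer up front, with slower scale-out processing recombining asynchronously later — can be sketched in a few lines of asyncio. The tier names and timings are invented; this only illustrates the loose-coupling, eventual-consistency shape of the design.

```python
import asyncio

# Sketch of the loosely coupled, asynchronous pattern described
# above: an in-memory tier answers immediately, a scale-out tier
# post-processes in the background, and the results eventually
# recombine. Names and numbers are illustrative only.

async def hot_tier_answer(query):
    # Immediate, possibly approximate answer from in-memory data.
    return {"query": query, "estimate": 42}

async def scale_out_refine(query):
    # Slower pass over the persistence tier, running in parallel.
    await asyncio.sleep(0.01)  # stands in for a long distributed scan
    return {"query": query, "exact": 41}

async def main():
    fast = await hot_tier_answer("daily total")
    refine = asyncio.create_task(scale_out_refine("daily total"))
    print("answer now:", fast["estimate"])   # serve users immediately
    exact = await refine                     # precise result arrives later
    print("converged to:", exact["exact"])   # eventual consistency

asyncio.run(main())
```

The key property is the one John names: the two halves never block on each other synchronously, which is what lets the architecture scale where a tightly coupled, coherent system would fail.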
EMC has this fantastic federation, but to navigate the future, the federation has to kind of work in a coherent way. And so I was brought in very specifically to run those cross-functional initiatives in the company around the technology. You know, the questions are about, how do we have a coherent approach to software-defined storage? How do we deal with object across the entire company? How do we build our systems to be deployed in the hybrid cloud? And that's not independent of individual products, it's actually tying those products together, making sure that they work coherently, and that at the end of the day, the system comes together intelligently. A good example is what I'm doing with Paul right now. Paul wants to win in the big data analytics world, and HDFS and Hadoop are a big piece of that. So part of my job is to make sure that across the rest of the EMC ecosystem, we have a clear plan, in a consistent way, to make sure that we are an enabler of that vision, which means new protocols, new technologies, and new technical coordination. So it's a fun job, a lot of work. A lot of orchestration. It's orchestration. You're a conductor. I'm the orchestration layer. You're the DJ at EMC. John, thanks for coming on theCUBE. You're awesome, great content. You're a tech athlete, as we say. We could go on for an hour talking about this new world. I mean, that's a great picture you painted, love it. And you've got a clean canvas, and EMC has given you full latitude, and orchestration. It's going to be fun to watch. Thanks for coming on theCUBE. Really appreciate it. We'll be back with our next guest here inside theCUBE after this short break. We'll be right back.