Live from Las Vegas, Nevada, it's theCUBE, covering EMC World 2015. Brought to you by EMC, Brocade, and VCE. Hi everybody, we're back. This is Dave Vellante. I'm here with my co-host, Stu Miniman. Stu, welcome. Thanks, Dave. It's Wednesday. It's getting a little long. My voice is doing okay, but I just messed up a name too. It's like, you know, we are human here on theCUBE. We are not robots.

Randy Bias is here. He's the Vice President of Technology at EMC's Emerging Technologies Division. Randy, welcome to theCUBE. I have to say, the last person I thought I'd be interviewing at EMC World 2015, but congratulations. You have to dig back, like, years ago. No? Come on, Dave, think back. I remember VMworld 2010, when we brought, it was like, who's theCUBE and what are they? So we brought all our friends from the cloud, and Randy was there. And yeah, I mean, I bumped into Randy so many times out in public, and I used to work at EMC, and we'd catch a little bit of grief sometimes for some of the, you know, SAN technologies and various things. I gave EMC a very hard time, oh, about 20 times. So, well, as I said, you're the last person. Open source, OpenStack, I'm like, that's not going to happen. What? You know, Cloudscaling, congratulations. Thanks. You know, how was it? How's it feel, being inside the... It's great. EMC is awesome. The way I like to tell people is, you've got the Core Technologies Division, and then you have the Emerging Technologies Division, whose job is to disrupt the Core Technologies Division. And my job is to disrupt the Emerging Technologies Division. So: break glass, challenge assumptions, and help them be a better business.

Well, so you did a great keynote this morning, and a lot of themes that you typically don't hear at an EMC keynote. You talked about open source, talked a lot about OpenStack. Yeah. Talked a lot about the community. You know, that's your ethos, right? Yeah. You're bringing it in, how is it? Is the body not rejecting the organ? Oh no, there's a huge appetite at the leadership level to make a change. And I was telling somebody the other day that, you know, the interesting thing about Project CoprHD, pronounced "Copperhead," our open-sourcing of the ViPR Controller, is that there was no resistance at the leadership level. Everybody wanted to do it. And there was little resistance at the rank-and-file engineer level. They were all happy to be part of an open source project and kind of get it on their resume, right? Most of the resistance was in that middle tier, the people who were on the hook for executing and were worried about, you know, what's going to happen to their margins and their business models, and go-to-market and delivery times. It was all that stuff. It was all the mechanics. And once we went through the educational process and got to the other side, it was amazing, because everybody across the entire org just lined up right behind it. And I had people giving pitches to other people that sounded like they came straight out of my mouth. I was stunned. It was amazing.

So what does it mean to open source the ViPR Controller? Obviously there's the whole community aspect of open source and that, but talk about that. I'm also interested in the whole business model and what changes that portends. But start with: what does it mean to open source the ViPR Controller? Yeah, so it was really interesting. You know, there was a bunch of resistance and antibodies that came up as we started the process.
And one of the concerns was, you know, we're going to lose business. Some people argued we'd maybe lose half of our customer base, who would all just use the open source. Well, when we went out and looked at the revenue distribution, it was a power-law distribution curve where, you know, maybe 10% of our customers were providing 80% of the existing revenue for ViPR, you know, very large scale deployments. And I said, those people are always going to pay us, because they're depending on it in production. And the long tail, those are people who are kicking the tires. We give them the open source to make it easier for them to use it, build it into a big POC with very low friction, maybe add their own driver for their own obscure storage technology they want. And then when they're ready to go into production, they're going to come back to us. And, you know, when people finally got it, it was great, because they realized there was no impact.

So how about the licensing model? Has that changed? The economics? It's a dual-licensing model. We have the commercial version, which is ViPR Controller, and that's a standard enterprise license, sold exactly as we've sold it before. And then there's Project CoprHD, which is the open source version of that, the exact same code base. You get Project CoprHD off GitHub and there's no support, but you have access to the code. You can replace one of them with the other; they're completely interchangeable.

And that commercial model is a subscription model, or is it a traditional...? No, it's a standard enterprise software license, a dual-license model. So it's a license and the 18% for maintenance, or...? On the commercial license side, yes. Now is that common in open source land, or is that different? There's a bunch of people who do that; Asterisk, for example, has a business model like that. There's a bunch of different business models in open source land. I had to run everybody through those. The reason we chose this direction is that, you know, there are a lot of challenges to open-sourcing in a company like EMC. Like, how do we sell, you know, Red Hat-style enterprise subscription licenses? How do the salespeople get comped, and over what time period do they get comped? All of those. And so what we decided to do was kind of punt on that, because it would slow us down, and EMC's trying to move a lot faster. So the dual-license model allowed us to keep the same sales motion on the enterprise software license side, but have Project CoprHD open, so that customers aren't locked in. And then over time, we can look at other options for selling, kind of more of a Red Hat model.
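[Editor's note: as a rough illustration of the power-law revenue argument Randy makes above, here is a minimal Python sketch. The Pareto shape parameter and the customer count are hypothetical, chosen only to reproduce the "top 10% of customers, roughly 80% of revenue" pattern he describes, not taken from any EMC data.]

```python
import numpy as np

# Hypothetical illustration: draw per-customer revenue from a
# heavy-tailed (Pareto) distribution, then measure what share of
# total revenue the top 10% of customers contribute.
rng = np.random.default_rng(42)
revenue = rng.pareto(1.1, size=1000) + 1  # classical Pareto, x_min = 1

revenue_sorted = np.sort(revenue)[::-1]   # largest customers first
top_10pct = revenue_sorted[: len(revenue) // 10]
share = top_10pct.sum() / revenue.sum()
print(f"Top 10% of customers contribute {share:.0%} of revenue")
# With a shape parameter near 1.1 this tends to land around the
# 80% concentration Randy cites; the exact figure varies by draw.
```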
Yeah, Randy, I wonder if we can talk a little bit about stacks. When I look back at last year, one of the things that I thought was a, you know, moment that people will go back to and say, wow, that was really important and I can't believe I missed it, is EMC started selling commodity hardware. If you looked at what was inside ECS, it was, you know, a standard ODM box. It was not the traditional model of what I expect from EMC. Cloudscaling, you used a lot of those ODM-type solutions to help build your stack. Give us a little insight as to, you know, how EMC is looking to approach some of these new models. I heard a little bit about Caspian this week, you know, so how does that whole solution go together, and how is the EMC of today and tomorrow different from the EMC of yesterday?

Yeah, I mean, we really pioneered that. We brought Quanta Computer into Korea Telecom and AT&T. So, you know, we were very comfortable with the ODMs. And when I started that process, I was just trying to steal a page from Amazon and Google. You know, they seem to have figured something out; let's just do what they did, right? So I'm not brilliant or anything. So, you know, the thing is, if you go back and you look at the EMC leadership, Joe Tucci is telling everybody that, of the four things customers are asking for, number one and number two are open source software and commodity hardware. And so we're just trying to respond to that. That's what Project Caspian's about. It's like, look, customers want open source software. They want commodity hardware. They want to know they're not locked in. They want a vendor-neutral solution. They want to be able to get in under the hood if there's a problem. They want to know that they can kick you out if need be. And I think that's great. It isn't a problem if you're a company of our size that can actually compete on innovation and service and support, and I think a lot of others can't. So we're actually going to help drive that. We want people to move more to open source and to commodity hardware. And Project Caspian, I don't want to get into too many details, because we're just doing the sneak peek right now, but we're looking at really turning the tables on the old-school go-to-market for hardware appliances. Another way to really think about this is: customers want open source software and commodity hardware, but here's the thing. Your average enterprise can't staff up like Google and Amazon and Microsoft. They can't have those kinds of engineering teams. So how do you package something up for them so that it's consumable for an enterprise, gives them the benefits of web scale, and answers those needs?

Well, I mean, to your point, Randy, you look at the web-scale guys: they really don't have infrastructure teams. They have teams of smart people that build an application that can really withstand little bits of things falling over. I mean, the old Chaos Monkey, going in and killing things underneath. Infrastructure, you have people that bring in the racks and hook everything up, and at the end of the day they pull out the stuff that died, but they're not there tweaking knobs and adjusting it, which is a very different model from what IT traditionally has been in the enterprise. They're treating the infrastructure like a power plant. And then, you know, if any of the pieces go down, they just figure out how to get more power from somewhere else. They're not sitting there worrying about speeds and feeds constantly and trying to tune and groom everything.

Let's talk about hyperconverged. Let's do it. So that's something that you guys both talk about. You wrote a blog post, you commented, you guys argued. What's the crux of the argument? Randy, you feel like the way the industry has defined hyperconverged is off from where it should be. I can't stand the way it's defined. Well, the definition of hyperconverged, even inside of EMC, is that it's compute and storage combined. And this drives me crazy, because, like, that means my laptop's hyperconverged, a mainframe's hyperconverged. It makes the term hyperconverged useless. Now, some of the other definitions I've seen that are kind of around the edges are more like software-defined infrastructure, or kind of a more software-centric architecture for hyperconverged versus converged infrastructure. And I love that. I think that makes sense, right?
Because in that case, hyperconverged, if you think about it as being a software-centric architecture instead of a hardware-centric architecture, tells you clearly how it's different from converged infrastructure, and it tells you what it is. But then it doesn't necessarily mean that the compute and storage need to be together. If you go back and you talk to early Vblock customers, and this is less of a problem now, but if you talk to early Vblock customers, they'll tell you they would run out of storage oomph on a single Vblock, and they would go to VCE, and VCE would say, you need to buy another Vblock. You know they've got stranded CPU, right? Hyperconverged has the exact same problem. I mean, the way it is today with our competitors like Nutanix and SimpliVity, you can't scale the storage and compute independently. You got what you got. So if you run out of IOPS... Yeah, so.

Okay, so you're saying software should define the infrastructure. It's software-centric. Yeah, yeah, and absolutely, I agree 100%. It's got to be about the software. It's not about the box. I've still yet to have a customer come to me and say, you know what's going to solve all my problems? Convergence. Convergence is, like, you think of sheet metal and a box. And to the scalability concern there: many of the companies out there are saying, I can have compute-heavy or storage-heavy nodes. I can mix that, so when I create a pool I will have some flexibility. It's not one box where I can do that. I mean, even back in the early days of VCE it was, you know, there was one model, you can get it in one color, it's black, you know. So VCE's matured over time. VCE's software is helping to turn them into a platform where they can add compute separately or storage separately and put multiple different types of pieces together. We talked to Praveen about that earlier this week. So, you know, it's early days, and we look at some of these point solutions, and for the mid-market they might not need all of the things that we're talking about. But at least our vision, what we call Server SAN at Wikibon, is that it's about scalable architectures, it's about distributed things. I mean, that's the future of everything we've been talking about in IT the last few years from an application standpoint, and infrastructure needs to support that model.

Yeah, I mean, for the mid-market, I think the way the current hyperconverged offerings work is fine. But the problem is that, you know, if you look at Google, it's not a homogeneous system. It is what I call a relatively homogeneous system. And it's hard for people to understand, because in the old way of doing things, like when we went into Korea Telecom, they had 13,000 servers with 526 hardware configurations across 26 vendors. Compare that to Google's 10 hardware configurations. But Google doesn't have just one hardware configuration.
One of the problems with the current go-to-market for hyperconverged is they kind of want to sell you unicorns, like Ceph did in the open-source storage market, where they'd have you believe that, you know, there's one homogeneous system that's going to service all your needs. And the reality is that there's a variety of workloads. So you still need tiering in storage, you still need compute that's designed for high CPU or high memory, you need to scale storage and compute and networking all independently. And most importantly, you've got to separate the control plane from the data plane. When you run the APIs and the control software in a hyperconverged system in the same place as the data plane, it's a massive security risk.

But it sounds like hyper-siloed, managed through software. Hyper-siloed, managed through software, I love it. So I call it hyper-optimized. If you look at what Amazon does, right, they make a specific configuration that's going to live in one environment, which is theirs, and for a single application, and they build it at massive scale. The problem is the enterprise is very different. They can't build infrastructure for every single application. Virtualization helped give us, you know, a platform that we can put lots of applications on. But hyper-optimization sort of brings back the silo problem. I mean, I spent a decade trying to get people out of that silo of, let's make, you know, a temple for each application and tweak all those knobs, you know. So do we want to go back to that?

No, the difference is that in those silos, the control planes were siloed. Yeah. That's the fundamental problem, right? So all the management on the top end and the user interface, and also on the back end, was all siloed, right? When people complain about silos, they're complaining about the management of the silos. Like, right now they've got array after array after array, and they have to manage each array independently, and it kills them, right? Now, in a more modern world, we still have silos, but I like to think of them as sort of tiers for different kinds of workloads, and the control plane, on the front end and the back end, is actually a single uniform control plane.

And is that a manager of managers, though, or...? No, I think you look at something like OpenStack, and that's, like, a great way to have a common control plane. Good example, right. If you've got OpenStack, a customer can see in the service catalog that they can order a high-IOPS block volume or a high-capacity one, and they don't care. They just want to know that the handshake they made in the API to get the block device comes through. On the back end, that might be XtremIO in one place and ScaleIO for the other, right? That's what I mean. You're going to have different technologies for different workloads. You can't collapse it all. You can for a specific use case, like, I'm going to put hyperconverged under VDI, right? Good use of hyperconverged, right? But you're not going to put hyperconverged across your data center for all your workloads. It's not going to happen. The workloads are too different.

Randy Bias, we're out of time. We've got to go. The disruptor inside the company that's disrupting itself for the next disruptive age. Randy Bias, thanks very much for coming on theCUBE. Stu, appreciate your help, and I wish we had more time so you guys could get into it. We'll get into it more at OpenStack in Vancouver. All right, keep it right there. We'll be right back with our next guest. This is theCUBE. We're live at EMC World 2015. Be right back.
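[Editor's note: for readers who want to see the pattern Randy describes, one uniform control plane fronting heterogeneous storage backends, here is a minimal, hypothetical sketch using the OpenStack Cinder Python client, which was the current block-storage API at the time of this interview. The backend names ("xtremio-backend", "scaleio-backend"), credentials, and endpoint are illustrative assumptions, not values from the interview or any real deployment.]

```python
from cinderclient import client

# Connect to the single control plane (Cinder v2 API, circa 2015).
# Credentials and endpoint below are placeholder values.
cinder = client.Client(
    "2",
    "admin", "password", "demo",
    "http://keystone.example.com:5000/v2.0",
)

# An admin wires each catalog entry (volume type) to a different
# backend via extra specs; the backend names are hypothetical.
high_iops = cinder.volume_types.create("high-iops")
high_iops.set_keys({"volume_backend_name": "xtremio-backend"})

high_capacity = cinder.volume_types.create("high-capacity")
high_capacity.set_keys({"volume_backend_name": "scaleio-backend"})

# A consumer just picks a type from the service catalog. Which
# array actually serves the block device is invisible to them:
# the API "handshake" is the same either way.
vol = cinder.volumes.create(size=100, name="db-data", volume_type="high-iops")
```

The design point this illustrates is the one Randy makes above: the tiers (silos) still exist on the back end, but the management plane the user touches is single and uniform.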