Live from Las Vegas, it's theCUBE. Covering Edge 2016, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman.

Welcome back to IBM Edge, this is theCUBE, the worldwide leader in live tech coverage. Callista Redmond is here, she's the director of OpenPOWER. Welcome to theCUBE, good to see you, thank you for coming on.

Thanks for having me.

So OpenPOWER, you know, when it started, there were a lot of skeptics, oh, okay, you know, it's sort of a Hail Mary by IBM. Healthy skeptics, but I think you've proved that this is a real deal.

Thanks, thanks.

So congratulations. So give us the update on OpenPOWER.

Well, you know, we're about three years in, and we've successfully gone from five members, the initial sort of renegade crew, to 262 members. So we've had tremendous growth. I mean, these are folks who are really investing in the architecture, investing their time, resources, and energy to make Power a very compelling platform in the market. Not just a viable one, but a compelling one.

So that's a good point, right? You've got to have viability, but viability in and of itself is kind of table stakes in this game. So what does it mean to open Power?

That means basically everything, all the IP that IBM has developed, put into open source. So we are basically making it easy for everyone to have a starting point, whether you're coming in at the chip level, the board level, the system level, the integration level, through the software, and being able to really optimize through every layer of that stack. And now we're even moving from software into industry-specific instantiations. So we're getting into a lot of the traditional HPC space through personalized medicine, instrumented science, really seeing some compelling differentiation that you can achieve on Power. We're also moving very quickly into cloud and enterprise workloads. In many ways, it's a little bit of HPC for everyone.
We've all got a lot of data to sift through and value to glean from that.

So we had Tom Rosamilia earlier this morning, and we were talking about the hyperscalers, and he said, hey, we're an arms dealer. We as in, not IBM, but the systems division. I want to sell into those hyperscalers. It seems to us that the best chance of doing that is Power generally, and OpenPOWER specifically. So talk about hyperscale as an opportunity for you and what kind of traction you're seeing. Google was one of the five rogues. And so they're a hyperscaler, I think.

They're pretty sizable, you know?

What's the opportunity there and how are you guys doing?

You know, I think it really speaks to the consumption model. I mean, the consumption model for systems generally has bridged beyond the traditional systems providers. It's not just a game for IBM or any of the other big systems houses to be in. In some measures, those hyperscalers are going directly to manufacturers in Taiwan. We want to be the arms dealer to that platform, right? So whether they are engaging directly with a manufacturer or whether they want to engage directly with IBM, we're here to help, you know, move that forward. The other consumption-model shift that we've seen is obviously in cloud. I mean, you can't talk to anyone right now that does not have a cloud strategy, whether it's a private, hybrid, or public cloud strategy. That is essential for OpenPOWER, and we truly believe that we can be the arms dealer for those cloud providers as much as those hyperscalers. And, you know, so those two consumption models, whether it's direct with the manufacturer or through cloud resources, on-prem, off-prem, what have you, we need to be able to serve that need. And to do that in a compelling fashion, you have to have workloads that really take advantage of that hardware.
So Callista, how important is little endian in that whole equation, in terms of binary compatibility with all the applications that are out there? And what kind of catalyst has that been for your business?

So I think the decision to go from big-endian to little-endian byte ordering was as important to the OpenPOWER strategy as it was for us to take an open approach to the architecture, right? Because without the capability to migrate workloads quickly and easily, to port and recompile and get to parity with your x86 systems, and then to start adjusting the dials and start getting your magnitude improvements, without making that decision, the hill was too high to climb.

Okay, so you're basically attacking this, I know, Stu, you want to jump in, with an ecosystem approach. What are your aspirations for the ecosystem in terms of its market penetration? I saw some data the other day, about 20% of the market that you're looking for by some certain point in time. Can you clarify that first?

20 by 2020 would be good.

20 by 2020, is that?

Yeah, yeah, so we really feel that Power is a compelling differentiator to kind of aggregate workloads on the platform. And for us to effectively do that, we have to put a stake in the ground and go for it. To do that, we need to continue building out that community. And that community is not just composed of the ISVs that are coming to the platform. It's also the in-house developers that are already tuning their own workloads to best perform in their data centers. And so for that, we're doing numerous POCs. We're adopting and embracing many open source databases. And that's a true statement that you'll see across the IBM portfolio. It's not Power alone.

Callista, so often open source equates to drama for those of us that watch the environment. Think back to the earliest days of Linux. Watch OpenStack for the last bunch of years as to who owns it.
Even more recently, Docker is getting a lot of discussion: is Docker exerting too much control over it? IBM, of course, is the founder of Power and is doing OpenPOWER. How have you struck that balance between making sure that IBM can still make revenue and allowing the ecosystem to flourish?

By making Power much more relevant and more compelling to the industry, to our stakeholders across the community, not just to the software providers who want to diversify the platforms they're on, but to the end user who wants freedom of choice and a long-term durable strategy. Just as much as each one of those has a cloud strategy, they also have an open strategy. I mean, open is very much mainstream now. It's not relegated to some corner cases in the data center. Having the base building blocks across your portfolio of workloads, of the software that you're deploying in your data centers, is completely critical to your strategy. And it speaks to the price performance as well. I mean, you've got to be able to leverage those base building blocks as a key piece of accelerating your development for those workloads.

And containers are going to be a big part of that, right? Can you speak to the kind of global impact that OpenPOWER is having? I guess I especially want to understand how China sees this as an opportunity. China has done a lot with open source, and I know there's interest in OpenPOWER there.

So China is very interesting. I would say that they are the biggest highlight on sort of the domestic IT agenda. They are very keen to have a local IT economy. What country doesn't want that, right? They want to invent it, produce it, and consume it in China, for China. And you can start with a blank sheet of paper, or you can take an open approach to that. And by providing an open approach, by ensuring that we are with them every step of the way, we are providing them with the most durable strategy to go forward.
And for that, you've got to have sort of the state-level permissions and levels of support. And we've been able to sort of build that trust, build those relationships. And that translates to volume, right? That translates to volume.

So China clearly wants to be self-sufficient, not only for local markets, right? Potentially for global markets. It's got what, maybe four of the top 10 Top500 supercomputer results, maybe even more than that. It's got its own Linux operating system. So do you see China becoming a global power? And why is IBM comfortable with that? Because it's part of your sort of ecosystem?

It's part of the broader community, absolutely. I mean, we have use cases that are very alive in China that we've talked about, through China Mobile, through Tencent, of large-scale deployments that are comfortable on Power and who are investing in Power. So that gets you volume, drives costs down, brings adoption up. And brings additional workloads. It fosters that broader community. The community is the end user. It's the software providers. It's the hardware providers. Everyone is looking for that traction to get going in particular markets.

And IBM is part of that value chain, obviously, with software and also hardware components, integration components. Where specifically does the IBM Power group within the systems division play in that whole ecosystem? Where do you make your money?

So our business model is really at all steps of the chain, right? I mean, we can come in with systems. We have a great portfolio of Power systems to offer to those clients. We have a great portfolio of services that we can offer. We have a wonderful stack on the software side that runs really well on Power. And so across the value chain, as Power becomes more compelling in the market, we can participate in many parts of that. And we also participate in some of the IP licensing aspects as well.

Okay, and let's talk about OCP.
You mentioned you were there this year. Open Compute, yeah. What's the state of Open Compute? What's IBM's role?

So you can imagine for yourself a nice Venn diagram. Open Compute is very focused on a particular form factor. OpenPOWER is focused on a particular architecture. You overlap those where we can do an OpenPOWER, Open Compute form factor together. With Rackspace, we've really sort of homed in on a great model with the Barreleye system. And that is available in market today. That's an excellent example of cross-community collaboration. There's no reason that OpenPOWER needs to come up with some brand new form factor when we can sort of leverage the inertia that Open Compute has already gained in that space.

So Rackspace is a consumer of that platform. Is that right?

They're a consumer. They're also a developer.

And an inventor, if you will.

Yeah, exactly, exactly. So they're playing multiple roles here. In fact, at our OpenPOWER Summit in April, we got to hear from Rackspace and Google together saying they're happy with what they're seeing on POWER8. They're already developing for POWER9. In fact, they showed images of that motherboard at the summit.

Does that then tie in with OpenStack? Rackspace obviously has a very diverse offering now, but at their core they were one of the creators.

Exactly, so we're not here to create new wheels where we don't need them. Leveraging the best in class across different pieces of where that system will play, from the firmware through to the board and the chip, these are things that are very important to us.

When you talk to customers like Rackspace and others, what's their motivation? Is it workload-specific, data-intensive workloads? Is it that they're trying to cut down on power? What's the business case for them?

So in the case of cloud providers, they want to get more VMs on a machine. They want to lower their cost dimensions. They also want to make sure that they've got the pricing right.
So if we're able to lower those costs, which we've been able to effectively do with our cloud providers, then that presents a case for them to move to Power. They would also prefer that the underlying architecture, their underlying infrastructure, is transparent; some don't want to think about what's under the covers. They just want to drive the car. They don't want to look under the hood. So that becomes very important as well. We want to be able to present things that are easy to consume, making them as consumable as possible by opening up the APIs, opening up the level of integration and design as much as possible.

So 20% by 2020. What are the phases that you have gone through, and have to go through, to achieve that objective?

So the chapters we're on: we've gone from a whiteboard in year one, here's the vision, here's a couple of PowerPoints on why this is strategic not just for today but going forward, to year two, which really felt more like a science fair, where we had 15 pieces of hardware that hadn't existed before and lots of proof-of-concept things going on. Now we're in year three. This is about adoption. This is about deployment. This is about real use cases that are being deployed in data centers that matter. And that sort of starts to get the wheels turning across multiple parts of the industry. And then beyond that, it's roadmap.

You mentioned POWER9. What can you tell us about POWER9?

So POWER9 is great. We're going to be doing a couple of different versions of the chip, one for scale-out, one for scale-up. We're going to continue to have as many on-ramps as possible attached to that chip. So we'll continue to have NVLink. We're going to have CAPI and PCIe Gen 4. These are the interconnects to the chip that really take advantage of that performance, and they're very useful to our system designers and our integrators.

And some of this is radical performance.
You talk about CAPI; we're talking about major advancements in performance relative to what we've been used to. It's a huge step function, isn't it? Is my understanding correct?

I mean, no one is going to switch architectures if you're only able to glean a one or two percent improvement. You've got to get a one- or two-order-of-magnitude improvement, right? You've got to multiply that. And then that transition hill is easier to climb.

So at least 10x is kind of where this is headed, right?

10x for some. Maybe 100x.

5x for others, you know.

We can hit the 100x on some workloads. But it is really important that we're able to get that magnitude of improvement. Otherwise, the hill is just too high and the return is not big enough.

And that one-to-two-order-of-magnitude improvement might necessitate new thinking about how you write applications and so forth. But the justification is there.

I think everyone is starting to reevaluate how they're writing applications. Because in our estimation, systems of the future are all going to include acceleration. So for that, you're going to need to offload particular parts of your work to those accelerators in order to get that parallel processing. And that's going to be essential in systems as we go forward.

Well, when you think about how applications have been written for decades, it's been, okay, there's going to be some spinning disk, which is super slow, and I'll be able to do other things while that happens. And even flash hasn't dramatically changed that. Certainly it's sped things up, but in terms of application design, you still have that horrible storage stack and that I/O that takes place. You're attacking that in new ways. So you would think that that's going to allow developers to take the gloves off and really create new types of innovation. Are you seeing more than glimpses of that today? Or are you seeing even glimpses of that?
The biggest glimpses of that are where we're seeing leverage of open databases, the Kinetica examples, how much you can do in a very small amount of time, gleaning value from multiple sources at the same time. Not just one spinning disk, but many. So being able to capitalize on that is not only going to change how you can get performance out of your existing applications, but also where you can move industries forward. Is this where we come to the next Uber, the next transformation of a particular industry?

Exciting times, Callista. Thanks very much for coming on theCUBE and sharing what's going on with OpenPOWER. Really appreciate it.

Thanks. Thanks for having me.

You're welcome. Keep it right there, everybody. Stu and I will be back with our next guest. This is theCUBE from IBM Edge. We'll be right back.