 Live from Las Vegas, Nevada, extracting the signal from the noise, it's theCUBE, covering IBM Edge 2015, brought to you by IBM. Welcome back to Edge 2015, everybody. I'm Dave Vellante with my co-host Stu Miniman. Ken King is here, the GM of OpenPOWER at IBM. Ken, welcome to theCUBE, great to see you. Thank you for having me, enjoying it. So Edge, it's nice that Edge is expanding. When Edge started a few years ago in Orlando, it was primarily a storage show. It's expanding now: a lot of z talk, Power, some middleware, so that's got to please you. Well, absolutely. I mean, we are a systems business, we're a solutions business, and we like to talk to our customers across the entire spectrum about what we're delivering for them across the entire business. That's where we're going with Edge: enabling our customers to understand the entire portfolio of systems capabilities and solutions we can provide for them. So, obviously you've got a good base of customers in Power, you're trying to expand it, and you've made some moves, opening up the platform and the ecosystem. Where are you today in terms of momentum with Power? Well, when we look at the OpenPOWER initiative, the real objective of what we're trying to do is enable the acceleration of innovation. By opening up the platform, we're really opening up the ecosystem of partners and providers that will deliver solutions on top of the base architecture. We've got an ecosystem of players that has grown dramatically over the past 12 months. We started 12 months ago with five founding members; we now have 125 in the ecosystem, and those are members from across the entire stack: chip providers up through OEMs and ODMs, accelerator, GPU, FPGA, and memory providers, software providers, and universities. 
And globally, we have members of the ecosystem now that are contributing, innovating, and collaborating to build new innovations on top of the Power platform around the world. From our perspective, that's enabling an acceleration of innovation on the platform. So it's not just IBM delivering vertically integrated servers, but IBM and our partners delivering new innovations to the market, which we think is really exciting for the platform. Why are partners joining? I mean, obviously you've got the push; IBM is a big company with smart people, and you can go out and reach people and bring them into the system. What's the pull? Why are they joining? What are they asking for? It's an open platform. Our partners like developing on an open platform. The alternative is a closed-architecture platform that is very limited in what partners can deliver. In many cases, that closed platform provider will continue to vertically integrate capabilities, which squeezes out those partners and limits their profit margins and capabilities. They see this as an alternative that can drive an open platform that enables them and their innovations to be more successful in the marketplace and drive more return for them, as well as the fact that by collaborating jointly with IBM, we can jointly deliver accelerated innovations. You look at Moore's Law and some of the challenges going on there; from our perspective, the gains there have really started to diminish. The laws of physics are really impacting Moore's Law, and yet the explosion of big data is driving the need for performance even more aggressively. We feel that through collaboration with our partners up and down the stack, we can address that and readjust the trajectory of Moore's Law to drive the performance improvements necessary. That's the power of an open ecosystem. We've got a lot to talk about here still. Yeah, so Ken, you guys actually held a summit back in March that really caught my eye, with companies like Google and Rackspace. 
I'm wondering if you can help our audience understand. There are so many new open foundations out there. There's the Open Compute Summit that happened earlier this year, where we had theCUBE, and I'm going to be in Vancouver for the OpenStack Summit next week; Rackspace, obviously, has big representation there. How does OpenPOWER fit into this whole stack? It finishes the equation, basically. And Rackspace even said this; one of the guys from Rackspace made a comment saying we finally have the firmware and the hardware elements of the entire open platform. So for example, Rackspace announced that they're going to be delivering a solution based on the OpenPOWER Foundation. It's going to be an Open Compute reference design implementation with OpenStack software on top of it: end-to-end open capabilities that they're going to be delivering into their hyperscale data centers. Very compelling. Google has announced that they're doing things with the planars, and they've had some hardware reveals. The expectation, the hope, without sharing details because we're not allowed to, is Google deploying in their data centers. We have other hyperscale data centers around the world that are in the process of working with us on Power. They're not yet ready to go public, but over time you'll see more and more hyperscale data centers working with us and deploying OpenPOWER-based implementations in their data centers. And they can do that working closely with IBM and with ODMs to design these new white-box-based solutions that are specifically targeted at the workloads they're trying to address within their data centers. So it creates a model for us that enables us to address a market space with the Power architecture that may not have been part of our core historical base, but enables us to drive more effectively in a new model with our partners. It's really interesting. 
I mean, I think about how VMware just lived on x86 and how much that brought to the ecosystem. We've seen Power go into SoftLayer, and I heard you mention containers. Can you talk about how containers fit with Power? We actually kicked off the show talking about how, with Linux on a z13, we can run Docker inside that. There's lots of buzz; we'll be at DockerCon later this year. And for the real hyperscale players, I mean, containers are driving a lot of that discussion of the modern application. Yeah, and that's something that we're implementing within the Power solutions as well, having those container capabilities built in. You'll see more and more of that integrated into the platform, and as we continue to open up the architecture, it makes it easier for us to do that. For example, we've recently migrated to little-endian Linux, right? All the core distros, Red Hat, SUSE, Ubuntu, now support little-endian Linux on Power, which makes it very easy to deploy applications on top of Power or migrate solutions from x86 over to Power, because it's a simple recompile, or for interpreted languages, no recompile whatsoever. And when you add containers to that, it makes it very easy to implement solutions on top of it, as well as into the hyperscale data centers. Well, what that does is it makes the whole binary compatibility issue irrelevant. It goes away. And it goes away. Now you bring in a whole new set of applications that can very easily migrate. I mean, it's kind of a bigger play on the Linux-on-z move, right? We had sort of opened up that platform. I want to go back to this whole notion of innovation and what you said about Moore's Law. It's a really interesting topic. A lot of people have said Moore's Law is dead. It's certainly begun to attenuate in its classic form, density on the chip. And now we've found new ways. We had Pat Gelsinger on last week; obviously he's qualified to talk about Moore's Law. 
I asked him, is Moore's Law dead? He said no; what people underestimate is human ingenuity and the ability to come up with new, innovative ways to keep performance doubling every, whatever, 18 months. Regardless, it feels like innovation is no longer fundamentally coming from just this Moore's Law doubling every 18 months. It's coming from being able to combine technologies on top of a platform and leverage them. Think of self-driving cars. Look at Waze: it's a combination of technologies. Look at Uber: a combination of technologies. I wonder if you could talk about that. Where are you guys specifically on Moore's Law? I mean, IBM's got a lot of credibility in this space. And tie that into innovation and where it comes from in the future. Yeah. Well, if you look at the trajectory of Moore's Law right now, specifically from the silicon, you know, doubling every 18 months at a price-performance point, it is degrading. It's no longer at the point where it's doing that just from the silicon itself. And so some of our competitors are looking at ways of trying to extend that by integrating more and more into the base silicon to address it. But the laws of physics are the laws of physics, and when you get down to seven, five nanometers, eventually you've got to have an alternative. And as you're going through that process, with the explosion of data and the necessity of being able to get insight from that data, we need even more performance capability than we've had previously. So it's got to come from other places. 
And from our perspective, by creating that open ecosystem, that open collaboration model, we're able to drive that through other layers of the stack, whether it's GPU acceleration tightly integrated with the CPU, or FPGAs tightly integrated through what we have as our CAPI-attached integration, which is basically a coherent architecture. You can look at it as another processor on the board that acts as if it's in main memory; it takes in other capabilities and treats them as if they're using the same memory address space. So it's as if it's on board, right? It's persistent, it's coherent. By creating FPGA capabilities that integrate that way, and by integrating GPUs more closely with the CPU, that drives the type of performance we need above the core chip. So those integrations, of memory and of software capabilities that we're seeing integrated more closely with the chip, are driving the trajectory of Moore's Law back to where it needs to be, versus just the silicon itself. And there's a lot of work we're doing with NVIDIA around GPU acceleration. We just won the $325 million bid with the US Department of Energy, and I'll talk about it tomorrow with some of my compatriots here from Oak Ridge and Lawrence Livermore. The main reason we won that versus our competition was because of OpenPOWER, because of the capability of integrating GPUs and CPUs to drive those kinds of five to 10x performance improvements above and beyond what the current architectures can deliver. So it's those kinds of integration, from our perspective, that are driving the next level of performance improvements, which requires that collaboration up and down the stack. When we look, Ken, at some of the big data workloads and problems that are being solved, we look back to the HPC world and we say, wow, there's a lot of affinity between those two worlds. 
Talk about the roots of Power, or its presence, in HPC. Is that a tailwind? I presume it is a tailwind, but how is that manifesting itself in innovation and in the development of your market? Yeah, so there were people who said when IBM divested its System x business that we were leaving HPC, because that was a big business for it, but it's exactly the opposite. Our big investment now in HPC is in Power. You look at the base capabilities of Power, with the eight threads per core and the extensive memory bandwidth and throughput it provides, and then you add to that the openness of the architecture, the CAPI interface, and the ability to integrate GPUs and FPGAs with it; that is targeting big data workloads directly. And we're doing it in a way now that we're calling data-centric computing. It's a model we've put in place around the HPC space which basically says you integrate more and more of the compute capability into the storage, into the network, into other elements, so that you're not constantly moving the data to the compute; you're moving more of the compute to the data. Because with the kind of data you now have to manage, process, and move, it becomes unfathomable to manage all of that and get insights from it quickly if you're constantly moving data to compute. So part of that framework, that model we call data-centric, is moving compute to data. And that's all associated with the integration capabilities I was just talking about, the GPUs and FPGAs, et cetera. We see that instantiating itself in HPC, but then migrating to other kinds of workloads as well. For example, we've got a solution we just brought to market called the Data Engine for NoSQL. Basically, it's our Power processor with CAPI-attached flash storage, so it treats the flash storage as if it's in memory, even though it's direct-attached. 
FPGA accelerators from Altera, and then the NoSQL engine from Redis on top of that. Through the FPGA accelerators and the CAPI-attached flash, it dramatically accelerates the ability to process the unstructured data, the NoSQL data, coming through the Redis application versus your traditional workloads on x86. In fact, you can shrink it down with a 24-to-one consolidation; actually 12-to-one: one server plus one flash storage device delivers the equivalent performance of a rack of 24 x86 blades. So it's the same performance versus a rack of 24 blades, with a dramatic reduction in cost and energy, and better throughput. That's an example of a commercial implementation of a big data kind of solution leveraging the same capabilities we're putting into HPC. That's great, I love that example. So you're basically providing this platform, this open ecosystem, that delivers capabilities that you're arguing are beyond where your competitors are, with the difference being your promise to the community is, we're not going to eat the white space. Is that right? Am I getting it right? Well, the promise to the community is that we're working with our partners to deliver those solutions jointly, those new innovations jointly with the partners, versus taking those and eating them, as you said. But we're also bringing the partners' capabilities back into our offerings. So it's still our partners' capability, but it's part of the IBM solutions we're delivering to our clients. We're delivering solutions to our clients that would include Altera's FPGA capabilities or NVIDIA's GPU capabilities as part of the solution offerings. It's not just a black box; it's now a solution offering that addresses client needs more effectively than maybe would have been done previously with an IBM-only offering. And you're offering your largesse as a go-to-market player to the ecosystem, which is unique. Yeah. 
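As an aside, the data-centric idea King describes, pushing compute to where the data lives rather than shipping every record to a central compute node, can be sketched in a few lines of plain Python. This is purely a conceptual illustration; the class and function names below are invented for the sketch and are not IBM's API, and the actual Data Engine for NoSQL does this in hardware with CAPI-attached flash and FPGA accelerators rather than in software.

```python
# Conceptual sketch of "moving compute to data" (data-centric computing).
# All names here are illustrative, not IBM's actual API.

class StorageNode:
    """Holds one shard of records and can run a predicate locally."""
    def __init__(self, records):
        self.records = records

    def scan_all(self):
        # Compute-centric model: ship every record to the compute node.
        return list(self.records)

    def scan_where(self, predicate):
        # Data-centric model: run the filter where the data lives,
        # so only matching records cross the "network".
        return [r for r in self.records if predicate(r)]


def hot_keys_naive(nodes, threshold):
    """Pull all records back, then filter centrally."""
    moved, hits = 0, []
    for node in nodes:
        rows = node.scan_all()          # everything crosses the wire
        moved += len(rows)
        hits += [r for r in rows if r["hits"] > threshold]
    return hits, moved


def hot_keys_pushdown(nodes, threshold):
    """Push the predicate out to each storage node."""
    moved, hits = 0, []
    for node in nodes:
        rows = node.scan_where(lambda r: r["hits"] > threshold)
        moved += len(rows)              # only matches cross the wire
        hits += rows
    return hits, moved


if __name__ == "__main__":
    shards = [StorageNode([{"key": f"k{i}-{j}", "hits": (i * j) % 100}
                           for j in range(1000)])
              for i in range(4)]
    naive, moved_naive = hot_keys_naive(shards, 90)
    push, moved_push = hot_keys_pushdown(shards, 90)
    # Same answer either way, but far fewer records shipped.
    assert sorted(r["key"] for r in naive) == sorted(r["key"] for r in push)
    assert moved_push < moved_naive
```

The same pattern shows up as predicate pushdown in databases and locality-aware scheduling in Hadoop-style systems; the approach described in the interview goes a step further by making the direct-attached flash appear to the processor as part of its own memory address space.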
And that's another reason why our partners, the ecosystem, are growing so fast: they see the value of being part of this ecosystem and how we can go to market. Partners like money. You show them a path to profits, they're going to pay attention. And customers like innovation. And customers like innovation. So, we're running out of time, and I had two questions for you. How are you dogfooding this stuff? Stu mentioned SoftLayer. What are other examples of how you're bringing this internal within IBM, whether it's cloud or even internal IT? Yeah, so I mentioned the Data Engine for NoSQL, right? There's SoftLayer, where we now have OpenPOWER bare metal servers that include capabilities from Mellanox for network bandwidth, and Tyan actually created the base server. By using a third-party ODM that developed that server, we now have a capability at an under-$6,000 price point that meets the SoftLayer cloud bare metal requirements, working with our partners, versus IBM trying to vertically integrate that type of solution ourselves. That's another example of leveraging the open ecosystem to create an offering specifically targeted at a space that wouldn't historically have been an IBM space for developing Power servers. Another example is a GPU acceleration offering we just brought to market, our S824L, which has NVIDIA's GPU capabilities built into it. So for clients that want to leverage the combination of GPU acceleration with the CPU where GPU is appropriate, we now have an offering we're bringing to market with NVIDIA's capabilities. So, three or four good examples of where we're bringing it in and integrating it, eating our own dog food with our ecosystem. Yeah, excellent. We're getting the hook, Ken, but really great story. There were a lot of skeptics, like you said, when you sold your x86 business: can you do it? It seems like there's a lot of momentum, the ecosystem is growing, and there's always room for alternatives in technology. 
So congratulations on the progress, and thanks very much for coming on theCUBE. Glad to have been here, appreciate it. All right, keep it right there, everybody, we'll be back. This is IBM Edge, this is theCUBE, and we'll be right back.