Live from Las Vegas, it's theCUBE, covering IBM Think 2018, brought to you by IBM.

We're back at IBM Think 2018. You're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host Peter Burris. Ken King is here, he's the general manager of OpenPOWER at IBM, along with Sumit Gupta, PhD, who is the VP of HPC, AI, and ML for IBM Cognitive Systems. Gentlemen, welcome to theCUBE.

Thank you, thanks for having us.

So, really, guys, a pleasure. We had dinner last night, talked to Bob Picciano, who runs the OpenPOWER business. Appreciate you guys coming on. But I've got to ask you, Sumit, I'll start with you. OpenPOWER, Cognitive Systems, a lot of people say, well, it's just the Power system. This is the old AIX business, it's just a renaming, it's a branding thing. What do you say?

I think we had a fundamental strategy shift, where we realized that AI was going to be the dominant workload moving into the future, and the systems that have been designed today or in the past are not the right systems for the AI future. We also believe that it's not just about silicon or even a single server; it's about the software, it's about thinking at the rack level and the data center level. So fundamentally, Cognitive Systems is about co-designing hardware and software with an open ecosystem of partners who are innovating to maximize data and AI throughput at the rack level.

So, I remember talking to Steve Mills, probably about 10 years ago, and he said, listen, if you're going to compete with Intel, you could copy them, but that's not what we're going to do. You know, you didn't like the SPARC strategy. We have a better strategy, is what he said. Our strategy is we're going to open it up, we're going to try to get 10% of the market, and we'll see if we can get there. But Ken, I wonder if you could talk about the strategy at a high level and maybe go into the segments.

Yeah, absolutely. You're absolutely right on the strategy. We have completely opened up the architecture. Our focus on growth is around having an ecosystem and an open architecture, so everybody can innovate on top of it effectively and everybody in the ecosystem can profit from it and gain good margins. That's the strategy; that's how we designed the OpenPOWER ecosystem.

As for our segments, AIX and Unix is still a very big core segment of ours. Unix itself is flat to declining, but AIX is continuing to take share in that segment through all the new innovations we're delivering. The other segments are all high-growth segments, whether it's SAP HANA, our cognitive infrastructure and modern data platform, or even what we're doing in the hyperscale data centers. Those are all significant growth opportunities for us, and those are all Linux-based. That is really where a lot of the OpenPOWER initiatives are driving growth for us, leveraging the fact that through that ecosystem we're getting a lot of incremental innovation that delivers competitive differentiation for our platform. And when I say for our platform, that doesn't mean just for IBM, but for all the ecosystem partners as well. A lot of that was on display on Monday when we had our OpenPOWER Summit.

So talk a little more about the OpenPOWER Summit. What was that all about, who was there? Give us some stats on the OpenPOWER ecosystem.

So, it was a good day.
We're up to well over 300 members. We have over 50 different systems coming out in the market from IBM or our partners, and over 20 different manufacturers out there actually developing OpenPOWER systems. There were a lot of announcements and statements made at the summit that we thought were extremely valuable. First of all, we've got the number one server vendor in Europe, Atos, designing and developing POWER9; the number one in Japan, Hitachi; the number one in China, Inspur. We've got top ODMs like Supermicro, Wistron and others that are also developing POWER9 systems. We have a lot of different component providers on the new PCIe Gen4 and on the OpenCAPI capabilities, and a lot of announcements were made by component partners and accelerator partners at the summit as well.

The other thing I'm excited about is we have over 70 ISVs now on the platform. A number of statements and announcements were made on Monday from people like MapD, Anaconda, H2O, Kinetica and others who are leveraging the innovations brought to the platform, like NVLink and the coherency between GPU and CPU, to do accelerated analytics and GPU-accelerated database kinds of capabilities.

But the thing that had me the most excited on Monday was the end users. I've always said, and the analysts always ask me, when are you going to show penetration in the market? When are you going to show that you've got a lot of end users deploying this? And there were a lot of statements by a lot of big players on Monday. Google was on stage and publicly said the IO is amazing, the memory bandwidth is amazing, we are deploying Zaius, which is the POWER9 server, in our data centers and we're ready for scale, and it's now "Google strong," which is basically saying that this thing is hardened and ready for production. But we also had a number of other significant ones: Tencent talking about deploying OpenPOWER with 30% better efficiency and 30% fewer server resources required; Ali Cloud, the cloud arm of Alibaba, talking about how they're putting it on their X-Dragon platform, they have it in a pilot program and they're asking everybody to use it now so they can figure out how they go into production; PayPal made statements about how they're using it with machine learning and deep learning to do fraud detection; and we even had Limelight, who's not as big a name, but they're a CDN tool provider to people like Netflix and others, talking about the great capability with the IO and the ability to reduce buffering and improve streaming for all these CDN providers out there. So we were really excited about all those end users and all the things they're saying that demonstrate the power of this ecosystem.

All right, so just a comment on the architecture, and then I want to get into the cognitive piece. I mean, you guys did, years ago, little endian support, recognizing you've got to get the software base to be compatible. You mentioned, Ken, memory bandwidth, IO bandwidth, the CAPI stuff that you've done. So there's a lot of incentive, especially for the big hyperscale guys, to be able to do more with less. But, Sumit, let's get into the AI, the cognitive piece. Bob Picciano comes over from running a $15 billion analytics business, so obviously he's got some knowledge, and he's bringing in people like you with all these cool buzzwords in your title. So talk a little bit about infrastructure for AI and why Power is the right platform.
Sure. So I think we all recognize that the performance and even power advantages we were getting from Dennard scaling, which went hand in hand with Moore's Law, are over. So people talk about the end of Moore's Law, and that's really the end of gaining processor performance from Dennard scaling and Moore's Law. What we believe is that to continue to meet the performance needs of all of these new AI and data workloads, you need accelerators, and not just compute accelerators. You actually need accelerated networking, you need accelerated storage, and you need high-density memory sitting very close to the compute.

And if you really think about it, what's happened is, again, it's a system view, right? We're not taking a silicon view, we're looking at the system. The minute you start looking beyond the silicon, you realize you want to get the data to where the compute is, or the compute to where the data is. So it all becomes about creating bigger, faster pipelines to move data around and get it to the right compute element. For example, we put much more emphasis on a much faster memory subsystem to make sure we're getting data from system memory to the CPU.

Coherently.

Coherently, from main memory. We put interfaces on POWER9, including NVLink, OpenCAPI, and PCIe Gen4, that enable us to get that data either from the network to system memory, or back out to the network, or to storage, or to accelerators like GPUs. We built and embedded these high-speed interconnects into POWER9, into the processor. NVIDIA put NVLink into their GPU, and we've been working with partners like Xilinx and Mellanox on getting OpenCAPI onto their components. And we're seeing up to 10x improvement in both memory bandwidth and IO over x86, which is significant. We're also seeing, and you should talk about this, up to a 4x improvement in training of ML and deep learning algorithms over x86, which is dramatic in how quickly you can get from data to insight, right? You can take training and turn it from weeks to days, or days to hours, or even hours to minutes. That makes a huge difference in what you can do in any industry as far as getting insight out of your data, which is the competitive differentiator in today's environment.

The outcome people want. But let's talk about this notion of architecture, or systems especially, because the basic approach to how we've been building systems has been relatively consistent for a long time. You start with the database manager and you run it on an Intel processor, you build your application, you scale it up based on SAP's needs. There have been some variations; we went into clustering because we were doing some other things. But you guys are talking about something fundamentally different. And flash memory, the ability to do flash storage, dramatically changes the relationship between the processor and the data. It means we're not going to see all of the workloads organized around the server to see how much we can get out of it; it's really going to be much more of a balanced approach. How is Power going to provide that more balanced systems approach as we distribute data, as we distribute processing, as we create a cloud experience that isn't in one place but is in more places?

Well, this ties in exactly to the point I made: it's not just accelerated compute, which we've all talked about a lot over the years.
It's also about accelerated storage, accelerated networking, and accelerated memory. The point being that if you don't have a fast pipeline into the processor from all of this wonderful storage and flash technology, there's going to be a choke point in the network, or there'll be a choke point once the data gets to the server, and you're choked there. So a lot of our focus has been, first of all, partnering with a company like Mellanox, which builds extremely high-bandwidth, high-speed networking.

As we're well aware.

Right, right. And I'm using one as an example. That's where the large partnerships come in; we have over 300 members in the OpenPOWER Foundation, as Ken talked about. Those partnerships exist because we brought together all these technology providers. We believe that no one company can own the technology agenda; no one company can invest enough to continue to give us the performance we need to meet the needs of AI workloads. That's why we partner with all these technology vendors, who are all investing billions of dollars to provide the best systems and software for AI and data. But fundamentally, it's a whole construct of data-centric systems, right?

Right.

I mean, sometimes you've got to process the data in the network, sometimes you've got to process the data in the storage; it's not just at the CPU. The GPU is a huge place for processing that data. How do you do all of that coherently, and how do those things work together in a system environment? That is crucial, versus a vertically integrated capability where the CPU provider continues to put more and more into the processor and disenfranchises the rest of the ecosystem.

Well, those are the countervailing strategies we want to talk about here. I mean, Intel wants to put as much on the die as possible, and it's worked quite well for Intel over the years. You had to take a different strategy; if you had tried to take Intel on with that strategy, you would have failed. So talk about the different philosophies, but really, I'm interested in what it means for things like alternative processing and the relationships in your ecosystem.

This is not about company strategies, right? I mean, Intel is a semiconductor company and they think like a semiconductor company. We're a systems and software company and we think like that. But this is not about company strategy; this is about what the market needs, what client workloads need. And if you start there, you start with a data-centric strategy, you start with data-centric systems. You think about moving data around and making sure there is heterogeneous computing, there's accelerated computing, and you have very fast networks.

We're currently building the US's fastest supercomputers. The project name is CORAL, and there are two supercomputers, one at Oak Ridge National Laboratory and one at Lawrence Livermore. These are the ultimate HPC and AI machines, right? The processors are a very important part of them, but the networking and storage are just as important. The file system is just as important. The cluster management software is just as important. Because if you are serving data scientists and biologists, they don't want to deal with how many servers they need to launch a job on, how to manage the jobs, how to manage the servers. You want them to just scale, right? So we do a lot of work around scalability.
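[To make the data-movement point above a bit more concrete, here is a minimal, hypothetical PyTorch-style sketch, not from the interview, of a training loop that uses pinned host memory and non-blocking copies so the GPU spends less time waiting on data. The model, batch sizes, and synthetic data are illustrative assumptions; the bandwidth gains the speakers describe come from the POWER9 NVLink and OpenCAPI interconnects themselves, not from any particular code pattern.]

```python
import torch
import torch.nn as nn

# Illustrative only: a small model and synthetic batches standing in for a real
# training pipeline. The point is that each training step must move a batch from
# host (CPU) memory to the GPU; the wider that link, the less time the GPU sits
# idle waiting for data.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Pinned (page-locked) host memory allows asynchronous DMA transfers to the GPU.
    x = torch.randn(256, 4096).pin_memory()
    y = torch.randint(0, 10, (256,)).pin_memory()

    # non_blocking=True lets the host-to-device copy overlap with GPU compute.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

[The faster the CPU-to-GPU and memory links, the cheaper each of those per-step copies becomes, which is where the memory-bandwidth and training-time advantages claimed in the conversation would show up.]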
We do a lot of work in using Apache Spark to enable cluster virtualization and user virtualization.

But if we think about it, and I don't like the term data gravity, it's wrong from a lot of different perspectives, but if we think about it, you guys are trying to build systems in a world that's centered on the data, as opposed to a world that's centered on the server.

That's exactly right.

You got that right?

That's exactly right. Yeah, absolutely.

I think, guys, we've got to wrap, but I just want to close with this: Tom Rosamilia always says infrastructure matters. You've got Z growing, you've got Power growing, you've got storage growing. It's giving a good tailwind to IBM. So guys, great work. Congratulations. Got a lot more to do, I know, but thanks very much.

It's going to be a fun year. I appreciate it. Thank you very much. We appreciate you having us.

All right, keep it right there, everybody. We'll be back with our next guest. You're watching theCUBE live from IBM Think 2018. We'll be right back.