Okay, welcome back everyone. This is live in Las Vegas, IBM Impact. This is theCUBE, our flagship program. We go out to the events, extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE. I'm joined by my co-host, Paul Gillin with SiliconANGLE. Our next guest is Brad McCredie, vice president and IBM Fellow in the Systems and Technology Group. Welcome to theCUBE. Oh, thank you, John. You've been on before with Dave Vellante, the big flash announcement in New York City. You've been on theCUBE before, but now we're here at the event. Big news here at IBM Impact: opening up the Power platform, opening it up in the cloud. Give us your take on your first experiences from Impact this year. Well, I got here about an hour ago, so my first experience is a great interview. That was the cab ride. Great interview with SiliconANGLE at theCUBE. But there's a lot of good energy here. I've been going around checking out some of the booths, and certainly in our Power booth, OpenPOWER, just as you mentioned, we're showing the first Power white box systems, something that just hasn't really happened much before. This is the first time we've done it. Taking our technology that we usually put in the data center and putting it in white boxes, getting ready to go into the hyperscale or warehouse-scale data centers. So, on Power, we were talking earlier about some of the big revolutions around open compute, hacker culture around some of the hardware. What is the big thing around the Power systems? Obviously, in San Francisco, you had the announcement with the foundation. Open is a key message. How does that change what you guys are doing internally? Could you share a little bit of color on how you move from the black box to the white box and what's involved? What's changed? What's new? What's the enabling piece?
Yeah, it is different. That's one of the things we've been working on, and things do have to change. We have this great technology, and very few people in the warehouse-scale data centers say, oh, we don't like the technology. But when it's sealed in a computer that has a door on it that you can walk into, it's not packaged up and ready to go into a warehouse-scale data center. So as we start going into that space, we're taking that same technology but we're opening it up. We're enabling people to build SOCs out of our chip technology. We're enabling other people to manufacture and design systems and sell them with enablement from my team; I lead the development team that helps them build those systems. We've got an open source firmware and software stack. You can't go into a warehouse-scale data center with proprietary code; you've got to go with open source code. So we created an open source stack to enable them to go into the data center as well. Those are some of the things we've done to take the same base technology but move it into a form that's consumable by warehouse-scale data centers. But Brad, why now? I mean, Power's been around for a long time. Suddenly there's this big move toward openness, toward open source and sharing the spec and bringing on partners. Why was now the time to do that? You know, there are about three things that come together at the same time that make us feel this is the right time to make this move. The first, which many people in the industry have talked about a lot, is Moore's Law. We clearly don't see Moore's Law continuing at the same rate and pace it has been. So in order to keep that cost performance coming down, you have to innovate now beyond just the processor and technology. You've got to innovate with accelerators, accelerated networking.
And so the first reason is, with the decline of Moore's Law, you need to do innovation across the whole stack. That's what OpenPOWER is about, bringing all the partners together to do that. The second thing is, the PC has driven so much of the server industry for so long. It drives our standards, it drives the volume base for DRAM and things like that. As the PC starts to go down in volume, we're going to have to innovate new memory solutions and things like that with our OpenPOWER partners in order to pick up what we're losing with PC standardization. So that's the second thing that makes this the time for OpenPOWER. And the third thing, of course, is that we're going into what people call the third generation of warehouse-scale data centers. The first generation was black box; in the second generation, everybody started building their own white boxes. And the third generation, we believe, is going to be innovation across the whole stack: SOCs, silicon innovation combined with IO and storage innovation and things like that. So as those three things come together, we think now is a really good time for OpenPOWER and the innovation it's going to bring, through our partners, to the data center. Is this also a recognition that Intel doesn't have it anymore, that x86 is running out of steam? I don't think I would say it's running out of steam. I don't think I would phrase it that way. What I would say is, if you're going to drive the advancements right now, you've got to bring in lots of people, lots of partners. If you look at OpenPOWER, we have Mellanox, NVIDIA, Altera, Xilinx, Google, all the FPGA vendors, all the memory vendors, Micron, Samsung. All these people have got to come together and innovate in order to move the ball forward. That's how hard it's going to be.
If you're just at home playing with your transistors, playing with your silicon and your microprocessor design, you're going to have a really hard time bringing the innovation to move the ball forward. Which is a way of saying Intel was not doing that very effectively. You saw this as a time for IBM to take the initiative. Yes, I think that now is a good time for us to get in there and take the initiative to try and bring those people together to do that innovation. Brad, talk about the folks out there. Obviously, you can get into the enabling side, the open source stack; that's all the modern era. The modern era of computing, as we call it, is upon us. A lot of folks look at IBM and say, oh, wow, IBM got rid of the ThinkPad and the laptops and the desktops. Now the servers, the low-end servers, were kind of pushed out to Lenovo as well with that announcement. So this is more the high end of the computing scale: web scale, warehouse scale. What does this mean? Obviously, being in the low-end commodity side of the business is a margin issue. Steve Mills talked about that at Pulse. But how do you respond when someone says, hey, I thought you were getting out of the server business? So what this is about is driving innovation. The OpenPOWER consortium is driving the innovation with lots of partners, lots of new things around memories, acceleration, IO. And we see that innovation feeding into our server business and driving it forward, which is very important; we continue to move the server business forward with innovation there as well. So we have innovation taking place, letting people build their own, roll their own out in the hyperscale space. A lot of people do that. Then of course we'll have our core IBM server business going strong, taking that same innovation and moving it into our customer sets. I hear from people all the time: oh, I want to be more like Google, I want to be more like Apple and Facebook and Amazon.
They build their own stuff. Obviously the scale that they're at is kind of a one-off, a black swan, a unicorn, whatever you want to call them; they're different. A normal enterprise still has needs, large-scale needs, whether web scale or hyperscale. What are some of those innovations? Because Doug was talking about the Linux investment with flash memory at the converged layer. He's talking about new software paradigms where things are just changing. The ability to have persistent memory, for instance, changes the software model. So what are some of those things? Can you get into the weeds a little bit and share? What's going on in the kernel and in Linux? Are things changing? What's the big aha? So over in our OpenPOWER booth here, we have six demos. I'll give you a few examples of the innovation we're doing. We have a very unique way we can attach FPGA accelerators to our processor: we can attach them coherently. So now you can have an FPGA share the memory that our processor is sharing. We implemented a couple of algorithms on that. We implemented Monte Carlo and got a 250X speedup over running on a normal processor by doing a CAPI-attached FPGA implementation of a Monte Carlo algorithm. We did the same thing with a key-value store, with memcached, with our partner Xilinx. We sped up a whole key-value store 35X. Again, we had memory shared between the processor and the accelerator, and that really enabled easy programming and a lot of acceleration. We also attached some flash to our processor and got about a 5X cost reduction on a key-value store: we took all of the key-value store out of main memory and put it into a very low-cost flash solution. So those are the kinds of things that are going to keep moving forward in the warehouse and in the regular data center, and those are some of the advanced things that we're doing with our partners in OpenPOWER.
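The Monte Carlo workload mentioned above is a classic embarrassingly parallel kernel: every sample is independent, which is exactly why it maps so well to a wide accelerator such as a CAPI-attached FPGA or a GPU. As a minimal illustrative sketch (not IBM's or Xilinx's implementation), here is the canonical Monte Carlo estimate of pi:

```python
import random

def monte_carlo_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square.

    Each sample is independent of every other, so the loop body is
    the kind of kernel that can be replicated across many parallel
    hardware pipelines on an accelerator.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

estimate = monte_carlo_pi(1_000_000)
print(f"pi is approximately {estimate:.3f}")
```

On a CPU this runs serially; the 250X figure in the interview comes from unrolling exactly this kind of independent-sample loop across FPGA logic, with coherent shared memory removing the usual copy-in/copy-out overhead.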
This isn't just like buying a box and dropping it in the server rack; this is engineered differently. What are the one or two things you could point to as the levers of the innovation? Is it the memory? Is it the flash? Is it the software? The truth is, if you go and look around, we actually have to do all of it. Silicon scaling was such a powerful force in our industry. I mean, when you got that transistor going twice as fast at about half the expense every 18 months, that just drives everything. So what do we have to do now? We've got to pull on every lever. You're going to see innovation at the network level. You see innovation at the storage level. You're going to see accelerators with FPGAs, GPUs. You've got to be pulling on all those levers if you're going to replace what silicon used to bring to our industry. So that's the next scale point. Absolutely. Yeah. Go ahead, I'm sorry. Is this going to create some new complexity in the market for buyers, given that you're talking about systems that are really special purpose? They're built for a particular use case scenario. So will buyers have new decision points to consider in all of the accoutrements that they buy with a processor, compared to what they've been used to, which is just Intel inside? Yeah, the general purpose processor isn't going to carry the mail like it used to. And so if you want to keep going down a Moore's Law-like curve on cost reduction, you are going to have to look at, let's call them use cases, some general buckets of use cases. Hey, I'm going to run key-value stores. Okay, let's find some ways to speed up key-value stores. Okay, over here, I'm going to run web front-end servers. Okay, let's find some things to speed up Java over here. And it may be different. They probably will be different.
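The key-value-store bucket described above, together with the earlier point about moving a key-value store out of DRAM and into flash, can be sketched as a simple two-tier store: a small, fast in-memory tier backed by a larger, cheaper tier standing in for direct-attached flash. This is a hypothetical illustration; every class and method name here is invented, not part of any IBM or memcached API:

```python
from collections import OrderedDict

class TieredKVStore:
    """Toy two-tier key-value store: an LRU-managed 'DRAM' tier
    that spills cold entries to a larger, cheaper 'flash' tier
    (modeled here as a plain dict)."""

    def __init__(self, hot_capacity: int):
        self.hot_capacity = hot_capacity
        self.hot = OrderedDict()   # DRAM tier, maintained in LRU order
        self.cold = {}             # stand-in for flash-backed storage

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)  # mark as most recently used
        if len(self.hot) > self.hot_capacity:
            # Evict the least recently used entry down to the flash tier.
            lru_key, lru_val = self.hot.popitem(last=False)
            self.cold[lru_key] = lru_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.cold:
            # Promote a cold hit back into the DRAM tier.
            value = self.cold.pop(key)
            self.put(key, value)
            return value
        return None

store = TieredKVStore(hot_capacity=2)
store.put("a", 1)
store.put("b", 2)
store.put("c", 3)          # "a" spills to the flash tier
print(store.get("a"))      # served after promotion back into DRAM
```

The design choice being illustrated: if flash sits close enough to the processor (low latency, coherently attached), the cold tier stops being a painful fallback and most of the working set can live there at a fraction of DRAM cost.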
So there will be some buckets and use cases, I think, going forward, as you try to buy servers, or get servers, that keep you on the Moore's Law curve. As you work with your partners and they have access to the Power Systems reference documentation, can they customize the processor itself for a different application? Yeah, we actually have some partners that are doing that already, customizing the CPU for use cases around security. We've got some partners that are customizing the networks and the peripherals around the Power processor. We have both ends of the spectrum taking place now. And what quality control, if any, will IBM retain over that process, or the foundation for that matter? Or is that not really something you worry about? No, no, it is something you worry about. Just as you talked about, you've got everything from a black box to a gray box to a white box. If you look at it from an IBM perspective, you'll see that kind of a range. But you'll also see, within the foundation, there will be architectural compliance and regulations to make sure you stay compliant with the architecture. Because the most important thing is the software's got to work, and to do that, you've got to keep that architectural compliance. You talked about the emergence of the hyperscale data center. For the people who don't know what that means, how do you define that? That's true, it's one of those fuzzy terms. But usually when you think of the hyperscale data centers, you're thinking of the very, very large public clouds. Google, Facebook, and Amazon are obviously the top three that come to a lot of people's minds. But it's a very, very large public cloud. SoftLayer is IBM's large hyperscale data center. One question we talked to Doug about earlier was Google's participation in the foundation. How important is Google's endorsement going to be going forward?
I mean, beyond joining the foundation, actually building a product, implementing a product, maybe even commercializing a product. How important will that be to the success of what you're trying to do? Well, Google's participation in OpenPOWER is really, really important in that you need to have all perspectives in the foundation. You need to have the perspective of users. You need to have the perspective of the developers. You need to have all of those perspectives in the foundation. And Google does a great job of bringing that guidance, that guiding hand, as to what the users of this type of technology are looking for. In terms of being an analytics engine, what are you doing with Power to make it better tuned to the analytics needs, the analytics market that you're attacking? Yeah, just as I was discussing a little bit earlier, we're putting advanced IO and things around the processor, as well as in the processor. If you look at the processor itself, it has more threads than other processors. We have larger caches than other processors. And on top of that, what we're showing here at Impact is how we've gone and, in a very advanced way, attached flash to have very, very low-latency flash right next to the processor. You can put 12 or 24 terabytes of direct-attached data right there next to a CPU to enable it to handle those big data sets and big data analytics. What kind of performance improvements are you seeing from that? Oh, we're showing 10X improvement with some of our implementations around high-speed network attach, and even more importantly, as with the example I gave with the flash, it's really going to enable us to show a 5X cost reduction, because we're really going to be able to do with flash what people were doing with DRAM.
And DRAM's about five times more expensive than flash. And when you look at these big data problems right now, all of the cost is in storage. So we're seeing a big bubble in the market right now, the flash bubble. We were joking, Dave Vellante and Paul were joking, about the storage bubbles of yesteryear. Pure Storage just got a $3 billion valuation for an all-flash array. What is all the hubbub around this market right now? You mentioned FPGAs; the acceleration piece is key. Is it one piece? How do people get their arms around this bubble, or innovation cycle, if you will? Because there's innovation, certainly, but the valuations of these technologies seem to be a little bit out of the range of sanity. What are your thoughts on that? Is it real innovation, or do the marketplaces just not know how to value it yet? I think, as I said when we asked about why now, I talked about these three discontinuities striking. When you hit these discontinuities, what do we do? We all tend to fan out and try lots and lots of things. And it's going to get shaken out and weeded out, and we're going to come back and converge on a few things. I think we're in that spread-out phase right now, kind of like what you were describing, right? We've got a lot of people. You can go to the literature; there are tons of investigations on FPGAs. We've got two or three FPGA projects going. You've got your GPU acceleration. And we didn't even talk much about the advanced memories yet. People are trying to take cost out using MRAMs and phase-change memories and all that. I see all of these things going forward, and they're going to go forward for a while as we test them and try them and beat on them, and then we'll pick one or two. But we're definitely in the weeding-out phase.
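The 5X cost-reduction claim follows directly from the roughly five-to-one DRAM-to-flash price ratio cited here: if the working set moves from DRAM to flash, the storage cost scales down by that ratio. A back-of-the-envelope check, using illustrative per-gigabyte prices (assumed for the arithmetic, not quoted figures):

```python
# Illustrative prices chosen only to reflect the ~5x ratio from the
# interview; actual per-GB costs vary by generation and vendor.
DRAM_COST_PER_GB = 10.0
FLASH_COST_PER_GB = 2.0

# 24 TB is the direct-attach capacity figure mentioned above.
working_set_gb = 24 * 1024

dram_cost = working_set_gb * DRAM_COST_PER_GB
flash_cost = working_set_gb * FLASH_COST_PER_GB

print(f"Working set held in DRAM:  ${dram_cost:,.0f}")
print(f"Working set held in flash: ${flash_cost:,.0f}")
print(f"Cost reduction: {dram_cost / flash_cost:.0f}x")
```

The ratio, not the absolute dollar figures, is the point: any workload whose cost is dominated by keeping data resident gets the full benefit of the cheaper tier, provided latency stays acceptable.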
So what is the impact on the underlying architecture, or above the architecture, above the chips? Because you've got databases, and now you have in-memory, NoSQL, SQL. So you've got a lot of variety, a diversity of different pieces of the puzzle. You've got data layers, you've got database layers, you've got analytics layers. How do you make sense of that? Is it independent? It used to be you bought a database and you were done. You mention Mongo; that's only one aspect of it. Yeah, I think that part isn't changing that much. I think the software structure isn't changing that much. I think you're seeing different players as the open source pieces of software come in, in some cases simplifying, in some cases making things more complex. How much effort you're going to put in yourself makes a big difference in what you can use, right? While people are coveting the lower cost, it comes at the price of more roll-your-own at that lower cost. How about the CIO? What's going to change, in your mind? This is more of a vision question, but say I'm the CIO, I've got large data centers, I'm running a big bank, and I'm thinking, okay, I've got all these data centers out here. Obviously, footprint's important. What's the big change, the sea change, that you see in the data center? With cloud, obviously, you're going to tap into the cloud. And with build-your-own, which is now the trend, the maker movement's upon us, the homebrew computer club of the data center, whatever you want to call it, we've called it a variety of things, you now have significant footprint issues. Now you have enabling technologies. What will the data center look like for that CIO? What should he be thinking about? How does he lay that out?
We haven't talked about it much, but what I think is really key, if I were a CIO right now, what I'd be reading and studying and executing on, is getting to the hybrid model. We didn't talk about it a whole lot, but I think one of the most important things CIOs need to be thinking about is not how to take everything and move it all to the cloud, or off-prem, or keep everything within their four walls, but: how am I going to make these two things work together? That's going to be the win. The guy that figures that out first is probably going to be the guy that wins. Because... Without disrupting their operations. Yeah, absolutely. With all those usual constraints: you don't lose any data in the process, you don't lose any compute in the process. But I think the hybrid model is the one that's going to survive. That's Brad's opinion, and I think that's the one, if I were a CIO, I'd be looking at real hard: how I can do my critical stuff on-prem and then go off-prem. You're out here, and one of the things you're doing, this is sort of the formal rollout of the OpenPOWER Foundation at this event. You're talking about your roadmap for the future. Can you, in a simplified form, present where you're taking Power over the next two years? Well, the rollout that we're doing here isn't necessarily as much the Power rollout as it is the OpenPOWER rollout. And what we're really describing with the OpenPOWER rollout, and sharing with everybody, is the innovation that we're trying to foster across the very broad spectrum of partners in OpenPOWER. So we are showing, just as we've touched on here, the places we're taking it with FPGA acceleration, advanced memory, advanced IO and networking acceleration, and GPU acceleration. We're also showing some of our very first white box systems. We're showing a reference planar, a one-socket reference planar. Our first white box systems are over there in the booth.
So we're trying to show all the different avenues that we're innovating on, which we think are all key technologies that are going to enable the industry to move into the third generation of warehouse-scale computing. To what degree is the future of Power itself influenced by members of the foundation? It's definitely a two-way street, and I've actually described this to my IBM team; I keep painting the picture of the two-way street over and over again, where we get influenced by the foundation members, and they influence what we build and the technology, and we try to learn from them. And then, in the same way, we take that technology from the foundation, as I described earlier, and move it into our products to make the products more differentiated and more productive. So it is a definite two-way street, where the foundation's influencing us and we're influencing it. Brad, final question for you in this segment. What should the folks out there who are watching really pay attention to? What's so important about this Impact event? Why is this event here in Vegas really, really important to them? Well, you've got to realize you're talking to the hardware guy in the middle of a software conference. I mean, is this scale a new capability? Is there a distinction anymore? They're clearly blurring together. I mean, when we talked about the use cases, they are blurring together. And maybe I'll go ahead and take that angle in my answer to your question, which is: hopefully, as you go and look at some of this stuff, you'll see how we've taken the hardware and really, really applied it to the software use cases to get something where one plus one equals three, versus just, here's your hardware, go find your software, match them up at the end, and take what you get. Because that's the kind of stuff we're going to need to do to keep moving the technology forward. And the software part is pretty flexible.
You can pretty much use open source stacks, or whatever the developers want to innovate on top of. Yeah, absolutely. And if you look at most of IBM, not all, but large pieces of what we do, there's that meshing of an open source foundation with differentiation on top. I think you're going to see more and more of that. It gets the job done, that's the key, right? Time to value is always top of mind, but we're in a massive change. Brad, thanks for sharing your insights here on theCUBE. We'll be right back after this short break. Thank you, gentlemen. All right, thank you.