Live from Las Vegas, it's theCUBE. Covering Edge 2016, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman. Welcome back to IBM Edge, everybody. This is theCUBE, the worldwide leader in live tech coverage. Dr. Stephanie Chiras is here. She is the Vice President of Power Systems offering management at IBM. I know I'm probably leaving some things out of there; I could throw in cloud, I guess. All right, but welcome to theCUBE. Thank you, and thank you very much for having me back. Good to see you, yeah, a CUBE alum. That's great. I didn't have the opportunity to interview you last time. That's right, but Stu and I have met before. Yeah, happy to be back. Great, so you were telling us you're prepping for the main tent tomorrow? Yes, yes, prepping for the main tent. So tomorrow we get an opportunity to talk a bit about, really, tomorrow is all about the individual, right? The power of the individual to have impact in today's IT world, because access to infrastructure has changed dramatically through things like cloud. And so the ability for individuals to impact the industry, and their company, as well as the world, is better than ever, right? So, I know you spoke to Jason Pontin, right? And I'll be having the Innovators Under 35 with me up there to demonstrate what an individual can do to really change the industry. Oh, that'll be fun. Yeah, it'll be great. So what are you guys going to talk about? Give us a little preview without showing too much. We'll get a bit of their personal stories, right? And I think we're all driven by our personal stories and our passions. So they get a chance to talk about their innovations, and I'll get a chance to talk about why I do infrastructure, which is my passion. So it's kind of nice. Passion and infrastructure. Absolutely. So that's cool. So Power used to be kind of a boring marketplace, right? And then all of a sudden OpenPOWER happened. 
Yeah. And how did that change your life? Yeah, no, it's a whole new Power Systems today. Really with the launch of POWER8, which coincided with the launch of OpenPOWER, it's a different ballgame for us. I have responsibility in Power Systems for the Linux portfolio, and we changed our approach to Linux when we launched POWER8. We came out with the ability to support Big Endian and Little Endian in the processor, which made us much more relevant in the broader Linux ecosystem. We also focused on OpenPOWER. So an incredible focus on creating an ecosystem, not just at the software level, but all the way down to the hardware level. And we're up to over 250 members now. And that's helped us expand our ISV ecosystem. At the end of the day, it is all about ecosystem, and we have over 2,500 ISVs now running on Linux on Power. So it's changed dramatically, both our approach and, I think, how Linux is being used by clients. What was it, a few weeks ago was the 25th birthday of Linux? That was kind of exciting. It's amazing to see how it's grown up, from being a hobbyist sport to running corporations. Well, I loved, during the 25th anniversary, there was a great tweet that went out. It showed all the ingredients of a cake and said, good luck, go make it yourself. So here's your cake, good luck with that. I love that idea of the individual contributor. How much of that is based on open source, things like OpenPOWER and of course Linux? And how does IBM enable the individual beyond open source and community? Yeah, so the whole open source and Linux movement, if it's done nothing else, has taught us that development is done differently and that rapid innovation can be done through a community. And that was really fundamental to our kicking off OpenPOWER. 
As we look at how technology has advanced, with the end of Moore's law and the challenges at the processor level, it's really about creating a system. And how do you do that with an ecosystem of partners all across the stack, everything from the processor level all the way up to the application level? So as we look at how individuals contribute, IBM is a huge proponent of feeding back into the open source community, and IBMers are doing that every day. But in addition, it's about pulling together the community to find a way that it all comes together across a system stack to bring value. So the end of Moore's law, that's some red meat right there that we can chew on for a bit. Those are fighting words for certain folks. So where's the innovation coming from? Because this industry has marched to the cadence of Moore's law for decades. First of all, let's test that. The end of Moore's law, what does that mean to you and what does that mean to IBM? So I came from the processor level, right? I did silicon technology for a long time. And at the end of the day, it's all about return on investment, and shrinking down gates at this point to get the return on investment for performance is not happening at the rate that we were used to. So client value is no longer coming as twice the performance every 18 months based upon shrinking your gates alone. And given the commitment that it takes in order to keep shrinking, it's just not going to happen. The laws of physics fundamentally will prevent it. So once Moore's law meets the laws of physics, physics will win. So now it's about expanding beyond just the processor and just the silicon. It's about pulling in things like accelerators and accelerator technology. It's about having a breadth of those accelerators in order to feed all the new workloads that are coming in on Linux. 
But it's a different ballgame today, right? It's not just the processor that will provide a difference. The processor is still essential, that doesn't go away, and the architecture is important, but it is about the system and the system stack, all the way up to the software, that's going to bring client value. And that's what we have to focus on. It's not just about speeds and feeds. It's about client value. So IBM has some street cred here. Stephanie, you weren't even born yet, and Stu, you won't remember, but when IBM moved from ECL to CMOS, that was a big bet. It was a big bet. They probably still talk about that in the halls of IBM. Back in the day, Stephanie, when we used to break the ice to wash our faces and walk three miles to school. I'm sure you hear those stories. But that was a huge bet on IBM's part that a lot of people thought was foolish, but it obviously paid off. So what is the bet you're making now? Okay, let's agree that Moore's law is peaking, physics, et cetera. So what's the bet on now? Where's the innovation curve? So for Power Systems, we have focused, and I think you'll see this both in the recent announcements around POWER9 at Hot Chips and in our announcements around our recent LC servers, codenamed Minsky, the S822LC for High Performance Computing. It's about bringing in I/O capabilities that allow acceleration to be pulled in differently and tighter, closer to the processor. So we have built our processors with I/O and interfaces that pull acceleration closer to the processor in a way that can't be done on any other platform. It's about creating those gateways and those paths for the ecosystem to participate with accelerators. And as we look at Power in the cloud, I wonder if you can speak to both the challenges that you have architecting it and the opportunities that customers get to build new solutions in the cloud that they might not be able to build elsewhere. 
I think one of the things with the cloud aspect is, it's interesting to see clients as you pull in these new system-level architectures; it's different, right? And the benefit of the cloud is that it provides simple access to infrastructure that they may not be willing to pull in on-prem, things like FPGA accelerators and GPU accelerators. There's a lot of use in the cloud because there's not yet the familiarity and experience for an on-prem deployment. So I think cloud brings a new access point to new and emerging technologies. And for on-prem, there are lots of bold folks with extremely strong IT departments that are pulling that in, but from an accelerator standpoint, I think it's about the simple access to differentiated infrastructure that the cloud provides. I think there was a misconception. Some people thought cloud is just going to be cheap, but it sounds like there's a lot of room for innovation and trying new and different things in the cloud. And there have been a lot of analyst reports around hyperscale data centers saying that, going into the future, they will be the deliverers of innovation to folks who want to leverage the newest and the latest. So in our recent announcements for the expanded LC portfolio on September 8th, we announced some collaborations that we had done with Tencent, one of the biggest internet companies in China. They ran a Spark performance benchmark, and when they did it on Power, it was about using less hardware with better performance. That's them going after a TeraSort benchmark; that is different, right? They will deliver the newest and the best to the public with simplicity of access. So I think it is a misconception. 
I think cloud is really a new access point, and I think it will deliver innovation and access to that innovation. So since you brought up China, you've kind of, with OpenPOWER, created a monster potentially in China. And it's interesting, right? China's becoming self-sufficient, doing their own chips, and they've got their own version of Linux. I always joke, and it's true by the way, they have their own Wikibon. There's a Wikibon China, basically taking all of our content and translating it, and there it goes, it's open source. That's what happens in open source. Innovation, unintended consequences, not necessarily a bad thing. What's IBM's play with that? You seem obviously very comfortable with it. I think China is a unique market. They have made decisions, they're looking to have domestic innovation, and they have their own innovation agenda within China. And through OpenPOWER, we will participate, and we will enable them to do that and leverage the best of Power architecture to do so. So our partnerships in China are there to enable us to participate in the China market. Open means open. And we will do that. Partially open. I want to come back to something you were talking about earlier, your decision to support Little Endian. It's kind of a geeky term, but let's explain what that means. So from a binary compatibility standpoint, you talked about ISVs. How was that a game changer, and what have the results been in the marketplace? Yeah, it has been essential. So just to step back on endian-ness for a moment. Endian-ness refers to how you store your data: whether you store your most significant byte first or your least significant byte first. And that's something that you want to get right, right? You don't want to mess that up. And converting from Big Endian to Little Endian is not simple from an ISV or application-porting standpoint. So we did it in the hardware. 
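A minimal sketch of what that endian-ness distinction looks like in practice (the values and variable names here are illustrative, not from the interview):

```python
import struct
import sys

# The same 32-bit integer stored two ways: most significant byte first
# (Big Endian) vs. least significant byte first (Little Endian).
value = 0x01020304
big = struct.pack(">I", value)     # bytes come out as 01 02 03 04
little = struct.pack("<I", value)  # bytes come out as 04 03 02 01

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Pure scripting-language code never touches raw byte order, which is why
# such workloads port across endian-ness unchanged; only code that
# reinterprets raw memory (common in C/C++) is byte-order sensitive.
print(sys.byteorder)  # typically 'little' on x86, and on Little Endian Linux on POWER8
```

This is also why, as discussed below, scripting workloads move straight over while compiled C/C++ needs at least a recompile: the byte-order dependence lives at the binary level, not in the source.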
So Power, in the hardware, in the processor, can support either Big Endian or Little Endian. Now, the bulk of the ecosystem is Little Endian Linux. And so when we came out with POWER8, we put that technology into the processor so that we could support Little Endian applications. What that means is that scripting languages can just be moved over from Little Endian on x86 and run on a Power system. So that migration became much, much simpler. And for something like C/C++, a recompile is required, but it will move over with a simple recompile and run on Power. So it was absolutely essential for us to do that in order to participate in the bulk of the Linux ecosystem out there. And it's funny, I was with a business partner just a couple of weeks ago in Denmark, and they've done a lot of work to move their software. They have some software that they run, and they moved it over from Little Endian on x86 to Little Endian on Power. And I asked him, so how did that go? And he looked at me like I was crazy. He's like, why would you ask that? I'm like, well, we get a lot of mixed views on that. And he said, absolutely great. Absolutely great. I can't even believe you asked the question. So I think that technology shift for us was key. It has brought simplicity for ISVs to leverage Power infrastructure underneath their code. And has the success been, or where have you had success? Has it been workload specific or across the board? So we are very focused, and prioritizing, in our ecosystem. We are focused on big data workloads. Everything in the POWER8 architecture, and the Power architecture, is built to do data. I mean, our AIX workloads and our IBM i workloads, we grew up working on how to do data, right? That's what we do. And now we want to bring that capability to Linux workloads. So we're quite focused on open source databases. We're very focused on analytics. We're very focused on high performance computing. 
It's those workloads that can leverage the eight threads per core, the large cache sizes, the I/O bandwidth, and the memory bandwidth that we have. So we're prioritizing where Power can make a difference, and we're working on delivering the ecosystem in order to play in those spaces in particular. So how have in-memory databases played in? I don't want to make too big a deal of it. Steve Mills used to say, ever since we've had memory, we've had in-memory databases. He's like Bill Parcells: don't put him in the Hall of Fame yet. But nonetheless, in-memory databases have come back; memory is cheaper, and the need for speed and real time has coincided with all these other technology trends that we see. So is that a tailwind? It's all about getting the data closer, right? You have to get that data closer to the compute. That's what it's all about at the end of the day. So clearly, we have DB2 with BLU. We have a strong play in SAP HANA. That's been a great play, right? Has it? Yeah, the flexibility that Power brings to an SAP HANA deployment has been great for us. So we have great success with clients running SAP HANA on Power. And that's a good business, not just the anti-Oracle business. You always hear everybody talking HANA, HANA, HANA, but it's really happening in the marketplace for you guys? It's a great business, and SAP wants it to happen. And they're making it happen. They're making it happen, that's absolutely right. And the results are great for clients; the feedback has been outstanding. Power brings differentiation there, from flexibility and from performance. So that has been an incredible play for us. And in addition, capabilities like our CAPI technology, which is our I/O that brings FPGAs and acceleration closer in, you can put flash behind that. And since CAPI brings that accelerator into a shared memory space, that brings a very large memory space, right? 
A little bit slower, but a whole lot cheaper, right? Right into your shared memory space. And you're doing atomic writes, persisting the data, eliminating the horrible storage stack. I mean, no I/O call, right? The savings on instructions. The best I/O is no I/O, as the saying goes, from another guy you guys won't remember, but you've probably heard the name. Stephanie, one of the things I've really enjoyed at this conference, walking the hallways, is that we're fortunate to live in a time when there's just so much going on, especially the big engineering challenges and the things we're solving. I'm curious, what's getting you excited? What makes where we are now in technology so important? Yeah, I think one, I'm very excited about the open, right? The ability for everyone to participate has created a much more rapid innovation cycle than we've ever seen before. And clearly my focus is on Linux, so open plays key to that. I think also we're seeing the culmination of a change in the workloads that are running on Linux. So that's key, right? I think that's very interesting. We're seeing clients take Linux, and the workloads that run on Linux, to new levels, and that's driving new requirements on the infrastructure. So we're coming back to a place where, for Linux workloads, infrastructure matters, because the workloads are driving that kind of requirement onto the infrastructure. And as our ecosystem has matured, it's the right time for Linux on Power. Great, well listen, good luck with the keynote tomorrow. Thanks so much for coming back on theCUBE. It was great to meet you. Thanks for having me, it's a pleasure. You're welcome. All right, keep it right there, everybody. We'll be back with our next guest. This is theCUBE, we're live from IBM Edge in Las Vegas. We'll be right back.