Live from Las Vegas, it's theCUBE. Covering Edge 2016, brought to you by IBM. Now, here are your hosts, Dave Vellante and Stu Miniman.

We're back. Doug Balog is here. He's the general manager of IBM Power Systems. Doug, great to see you again.

It's always great to be here. I try to prioritize you guys when we're together at a conference.

Well, we're really thrilled that you could make time for us. I know you're super busy, just back from Asia Pacific, and we're going to talk about that a little bit. But it's a big show for you guys. I mean, Power — we've talked about it in previous CUBE sessions — is really starting to hit the knee of that curve, isn't it?

It really is. The strategy I laid out three years ago to transform the Power business was built around three tenets: grow the capabilities Power had for big data, and now extend that to machine learning, deep learning, and cognitive; get Power into the cloud; and underpin it all with openness. We really are seeing that take off here now. You see a lot of that new content being announced here at Edge, and that's why I'm so excited to be here this year.

Yeah, so give us what's new. We saw Scott Gnau up on stage today. You guys have a partnership with Hortonworks. What else have you got going on?

Yeah, so, this ecosystem we've been building — remember, three years ago we started with nothing, right? We had no Linux on Power ecosystem. And in a couple of years, we now have thousands of ISVs on the platform, tens of thousands of open source packages, and new partnerships that we saw being announced here today. Hortonworks is a great add to the portfolio. Mirantis as an independent OpenStack provider. You see Red Hat talking about doing more on Power, and Ubuntu releasing their latest release simultaneously on Power.
So I'll tell you what, it feels good to see the ecosystem now building on itself and bringing that kind of differentiation to the marketplace.

Is it true Mark Shuttleworth was walking around here?

Yeah, Mark was here. Absolutely, yeah.

A CUBE alum. We would have grabbed him if he was in the field.

Absolutely. But you know, some players are starting to step up, right? I mean, it was the band of five originally.

That's exactly right.

But some good names in that band.

Yeah, from an OpenPOWER perspective, like you said, it was the band of five: IBM, Google, Mellanox, NVIDIA, and Tyan, a motherboard company. And now we're 50 times bigger, with over 250 members a couple of years later. So it's incredible innovation. And of course, clients have always asked: really cool story around OpenPOWER, Doug, but what does it mean to me? So two weeks ago, we announced our new lineup of Linux servers, the LC lineup, a family of three. All of those servers and the innovation in them were developed with our OpenPOWER partners. So you've got NVIDIA technology — NVLink — burned into it. We did a unique chip with NVIDIA for the machine learning marketplace. And that innovation is now available. We've got time-to-market advantage, we've got differentiated technology, and there are some great examples down on the floor of how ISVs are already taking advantage.

NVIDIA, hot company, going hard after machine learning. We're doing something next week with them down at Strata. We're also doing something with IBM — Monday night NVIDIA, Tuesday night IBM, we've got a little party with each of you guys. So that's going to be fun. We'll bring this together, because you've got a great partnership going on there. Doug, there are some really cool innovations happening. I was just checking through the Twitter stream; people are getting excited about things like a prototype for TensorFlow.
You talked about the machine learning — talk about some of the interesting use cases you're seeing on Power.

Yeah, absolutely. We have bet big on acceleration, right? We've been very clear on that for a number of years: we see a post-Moore's Law world where it's a really good general-purpose processor married to a set of acceleration technologies. Originally, when we launched POWER8, we talked about the CAPI capability — the Coherent Accelerator Processor Interface — for things like FPGAs, Xilinx technology. Xilinx is really good in that ingest and data-analysis kind of mode. We did things with flash technology attached to CAPI to expand the memory space beyond what you could classically get in a server — my goodness, you add another 40 terabytes of memory and it makes a huge memory footprint, and eliminates I/O. Redis was one of the first to jump on and take advantage of that CAPI-attached flash from an in-memory perspective, reducing the footprint by about 24 to one from Intel servers onto a Power server. So a great big data use case.

And then finally, with this NVLink technology, you see some of the classic HPC workloads in the research and university space. This burning of NVLink into the chip is part of the path to CORAL, the program we talked about in 2014 for next-generation supercomputers at some of the Department of Energy national labs. But moving it out of that unique HPC space: one of the ISVs down on the floor here today is a company called Kinetica. They have written a brand-new relational database from the ground up that always assumes GPUs are there. There's no mode where they run without GPUs.
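The consolidation math behind that CAPI-flash claim can be sketched as simple capacity arithmetic: extending each node's addressable memory with tens of terabytes of flash collapses the number of servers an in-memory dataset needs. The figures below are illustrative assumptions, not IBM's published benchmark numbers; the exact ratio depends on the configuration.

```python
# Back-of-the-envelope consolidation arithmetic for an in-memory store:
# DRAM-only commodity nodes vs. nodes whose memory space is extended with
# CAPI-attached flash. All sizes here are illustrative assumptions.
import math

def servers_needed(dataset_tb: float, capacity_per_server_tb: float) -> int:
    """Servers required to hold the dataset entirely in (extended) memory."""
    return math.ceil(dataset_tb / capacity_per_server_tb)

dataset_tb = 48.0               # hypothetical in-memory working set
dram_only_tb = 2.0              # DRAM per commodity node (assumption)
flash_extended_tb = 2.0 + 40.0  # DRAM plus ~40 TB of CAPI flash, per the interview

print(servers_needed(dataset_tb, dram_only_tb))       # 24 commodity nodes
print(servers_needed(dataset_tb, flash_extended_tb))  # 2 flash-extended nodes
```

The point of the sketch is that footprint reduction scales with the extended capacity per node, which is why the interview frames it as eliminating I/O and shrinking the server count rather than as a raw speed-up.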
And so what they've been able to do, working with retailers, is help them completely optimize their supply chain and make real-time decisions, because we all know retail is such a razor-thin business. There are some large retailers I can't name yet that are deploying Kinetica on our new HPC systems with NVIDIA processors. So some really neat use cases around supply chain optimization.

So I know you can't name them, but talk a little bit about it. All the retailers have their Amazon war room; they're all trying to figure out how to compete, how to use their physical presence to compete online. How are you fitting into that? And where does real-time fit in?

Yeah, so it's a matter of taking the data that already exists in a relational database and moving it into this accelerated relational database, so that analytics can be done on a much faster basis. Because in the retail space — the buying and selling, the bidding — if you're not catching it at the right point, you're losing pennies on the dollar, which adds up over time. So that's what they see as their advantage: speed to decision-making.

So you're just back from Singapore. What's going on in Singapore? I said I hadn't been there in decades. What's new there? What are you guys doing there?

First off, I love Singapore. It's a phenomenal city, very modern. We were talking before we got on air — the Formula One race was there this weekend. We heard from Red Bull here on the main stage about how analytics fits into the whole Formula One model: a hundred sensor points, tons of data coming in. They're using Power plus flash technology at Red Bull to do that analytics. So Power is front and center there, at least in the Red Bull case, in the Formula One model. But even on the broader front, Singapore is a big, big shipping port.
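The real-time retail decisioning described above can be illustrated with a toy example: stream sales events, keep a per-SKU running sales rate, and flag a reorder before projected stock-out. This is only a sketch of the idea — the function, field names, and thresholds are hypothetical, not Kinetica's API or any retailer's actual logic.

```python
# Toy sketch of speed-to-decision in a retail supply chain: flag SKUs for
# reorder when the projected hours until stock-out drop below the supplier
# lead time. Illustrative only; not any vendor's actual implementation.
from collections import defaultdict

def reorder_alerts(events, stock, lead_time_hours=24.0):
    """events: iterable of (sku, units_sold, hours_since_last_event)."""
    sold = defaultdict(float)
    elapsed = defaultdict(float)
    alerts = []
    for sku, units, dt in events:
        sold[sku] += units
        elapsed[sku] += dt
        rate = sold[sku] / elapsed[sku]            # units per hour so far
        stock[sku] -= units
        hours_left = stock[sku] / rate if rate else float("inf")
        if hours_left < lead_time_hours and sku not in alerts:
            alerts.append(sku)                     # reorder before stock-out
    return alerts

stock = {"widget": 100, "gadget": 5000}
events = [("widget", 10, 1.0), ("gadget", 10, 1.0), ("widget", 30, 1.0)]
print(reorder_alerts(events, stock))  # ['widget']
```

The interview's point maps onto the loop body: the value is not the arithmetic itself but running it continuously over live data instead of in an overnight batch, which is where the GPU-accelerated database comes in.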
And we as a company, IBM, are very much believers in this next-generation technology called blockchain — the Hyperledger distributed-ledger model. I had heard about the Port of Singapore starting down the path of a beta project, a proof of concept, around customs: how can they speed up bringing a ship into the harbor, with a ledger of what's on board, and then quickly move it through and take care of it. It's an interesting story to hear. But then you're there in person, you look out over the water, and it's like, oh my goodness, look at all the ships sitting out there with cargo, waiting their turn. That's just time and money sitting on those ships. For me, it was a great visualization of what we're trying to do in a blockchain world.

And they're waiting for somebody on the docks to push paper? Is that right?

Oh yeah, it's a whole paper-pushing model. You take the ledger of what's on board, it's got to come in, they've got to validate it and check it, because it's all manual. It's just a lot of manual intervention today that could be completely optimized in a blockchain world.

So, wow, three years. This wouldn't have been possible without the OpenPOWER initiative, right?

Right. It's core to what we do — through the ODM server partners that are part of it, through the chip and system companies, through the software that's part of it. Thank goodness we did this three years ago. I wish we'd done it three years before that, but in the three years we have, we now see Linux on Power at over 10% of the hardware revenue I generate in the Power business. So it's become material for me.

And then we heard a number earlier — you want 20% of the market by the end of the decade, right?
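The customs use case rests on a tamper-evident, append-only shared ledger. A real deployment would use Hyperledger Fabric with distributed consensus among the port, carriers, and customs; the minimal sketch below only shows the hash-chaining idea that makes after-the-fact edits to a manifest detectable.

```python
# Toy hash-chained ledger: each entry commits to the previous entry's hash,
# so altering any recorded manifest breaks verification of the whole chain.
# Illustration of the concept only -- not Hyperledger Fabric's actual design.
import hashlib
import json

def add_entry(chain, manifest):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"manifest": manifest, "prev": prev}, sort_keys=True)
    chain.append({"manifest": manifest, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"manifest": entry["manifest"], "prev": prev},
                          sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

ledger = []
add_entry(ledger, {"ship": "MV Example", "containers": 1200})
add_entry(ledger, {"ship": "MV Example", "cleared_customs": True})
print(verify(ledger))                         # True
ledger[0]["manifest"]["containers"] = 9999    # tampering breaks the chain
print(verify(ledger))                         # False
```

This is the property that replaces the dockside paper-checking described in the interview: instead of re-validating documents manually, every party can cheaply verify that the shared record has not been altered.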
Yeah, Tom and I have some aspirational goals of what we'd like to achieve, but yeah.

And that's the ecosystem, right?

Yeah, that's not just us — that's Power footprints, Power sockets if you want to call it that, in the marketplace. But it comes down to this: if you're going to be viewed as a viable player in any market, you've got to have at least double-digit share. And so for us, the Linux server market today is nothing but upside.

Where does China fit into this? You've spent much time in China.

I was in China last week as well, right? As I do my world tour.

Life of an IBM exec. It's a beautiful model.

Yes, yes.

But China is really picking up on the OpenPOWER concept and driving it hard, becoming self-sufficient in chips.

Yeah, it's their whole model, and we certainly understand and appreciate it: becoming more self-sufficient, a whole indigenous IT model. For them that's everything from chips, to systems, to software, to partners that will bring that innovation to the many, many clients in the China market. So we've been working with our chip partner over there, Suzhou PowerCore. They now have their own Power chips in the marketplace. Server companies like Zoom and, in the near future, Inspur are building servers based on Power chips. Software ecosystems have been built. A couple of examples of wins over there: Tencent, a very large hyperscale cloud company — one of the BATs, as we call them.

Yeah, yeah.

A couple of months ago, they talked about evaluating Spark technology for analytics on the Power platform. Now, as we sit here today, two months later or whatever it is, they're actually deploying Spark on Linux on Power at Tencent. So this whole story we've been telling around the benefit of big data analytics on Linux on Power, and how we can play, is playing out live in that one client, Tencent.
China Mobile, the largest telco in the world, is also deploying Linux on Power in their core infrastructure. So we're starting to see the effect of it, for sure.

You've architected Power to attack the Achilles' heel of x86, right? Coincidence or by design?

A little bit of both. You've got to be good and lucky at the same time, right? That's the best model. But you're right: the x86 Achilles' heel has always been its weakness in memory bandwidth and I/O, and that's where Power has always shined. What we needed to do was put a new coat of paint on it and position it for the open space, because with just AIX — or just IBM — we weren't going to pick up all the new innovation around the open source software stack. Now that we've unleashed that, we've got a bursting ecosystem, and we've got clients seeing the benefits that we've known existed there for years.

Well, it just so happened to coincide with the whole big data meme.

Yes.

Which has been a tailwind.

Yep.

HPC meets big data.

Yep.

Right, and it hits commercial.

And that's why we're so excited about the next wave of that, which is cognitive. Not only does Watson run on Power at its core, as you know, but now there are these booming frameworks — Caffe, Torch, TensorFlow — and we can optimize those. So I think we're poised well, with our accelerator commitment, to catch the next wave here.

Doug, we've talked about virtualization. We've talked to some of your peers here about virtualization, and you can also do bare metal. I believe we talked last year a little bit about containers too. Where do you see infrastructure trends driving Power?

Yeah, there's no doubt Power has a great capability around virtualization. I'll give you an example. SAP HANA has now been available on the platform for over a year.
It's the fastest-growing platform for HANA in the SAP portfolio — well over 200 clients at this point. We were the first platform to be supported for virtualization of HANA. It comes back to the security capability around our virtualization, but also the isolation of that virtualization. In the industry you'd call it the noisy-neighbor syndrome: if one workload starts sucking up a lot of resources, we don't want Dave affected, right? We have that capability in our PowerVM technology. And once we showed that to SAP, they were more than excited to have a capability where you can run four HANA instances on a Power platform — and it's now scaling beyond that. So our virtualization story is sound.

Containers — I think we're still at an early cusp of containers. A lot of clients are looking at them, trying to figure out how to move workloads around using a container-based model. We support containers based on — you mentioned Mark Shuttleworth — our work with Ubuntu. So whether it be containers, or virtualization, and then of course bare metal: all modes are in play these days.

And IBM has always done a good job of getting into all the verticals. How are you doing with all the various certifications you need for these kinds of solutions?

Pretty well, to be honest with you. Part of building out the ecosystem is making sure you also have an industry view and approach it that way. I think we're still doing a lot of work at the cross-industry layer; there's more work to do in building on top of the databases in ways that are much more industry-aligned. But we're seeing wins across all industries with our Linux on Power play.

You guys compete, as always at IBM, on the basis of business outcomes. But you're in the server business. Sometimes it gets nasty. And with the decision to go little endian, now you've got a new sort of Dell EMC.
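The noisy-neighbor isolation described above comes down to enforcing hard per-tenant caps on shared capacity, so one greedy workload cannot starve the others. The sketch below illustrates only that scheduling idea in a few lines; PowerVM enforces it in the hypervisor with entitled capacity and shares, not with anything resembling this code, and the tenant names and numbers are made up.

```python
# Sketch of the noisy-neighbor idea: every tenant is granted at most its
# cap of the shared capacity, so a greedy "batch" tenant demanding far more
# than its share cannot starve the HANA instances. Illustration only.

def allocate(demands, caps, capacity=100.0):
    """Grant each tenant min(demand, cap), never exceeding total capacity."""
    grants = {}
    remaining = capacity
    for tenant, demand in demands.items():
        grant = min(demand, caps[tenant], remaining)
        grants[tenant] = grant
        remaining -= grant
    return grants

caps = {"hana1": 40.0, "hana2": 40.0, "batch": 20.0}
# "batch" asks for far more than its cap...
demands = {"hana1": 30.0, "hana2": 35.0, "batch": 90.0}
print(allocate(demands, caps))  # {'hana1': 30.0, 'hana2': 35.0, 'batch': 20.0}
```

The cap is what turns "four HANA instances on one Power platform" from a risk into a guarantee: each instance's worst case is determined by its own cap, not by its neighbors' behavior.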
You've obviously got Oracle trying to compete in a quasi-similar way with you guys, but much different as well. So talk about how you compete, where you win, and where you need to close some gaps.

Yeah, so you've got the Unix market, where of course we're the number one provider by a long shot. We know that market is on a glide slope — a shrinking market — as clients look for more open solutions. But I never want to leave that out, because it's my install base today, and I always want my AIX install base to know: we are with you for a very long time, because we know you count on us to run your firms. The investment area around Linux has been super important, and that is where we're competing every day against the likes of the now-shrunken HPE, the now-growing Dell, and the integrated Oracle stack — which they want you to take all of, and that's the way they want you to like it. Each one of our competitors has a bit of a different strategy. As I mentioned, Oracle's is: you take my app, you take my middleware, you take my database, you take my hardware, you take my flash technology, you take my Linux distro, and that's how you buy it. I don't know if clients really want to lock in that whole model with one vendor — and what happens next time they try to re-up?

Well, it's funny — IBM in the day would compete with that approach.

Oh, listen, we'll provide all the integrated pieces, but we still give clients flexibility of choice. So we'll optimize DB2 BLU on Power, we'll optimize WebSphere on Power, we'll optimize flash on Power, but we don't make the client buy it that way. If you don't like one of those pieces, we'll try to convince you, but if you want to swap one of those pieces out, we'll support that.
Yeah, if I had to compare and contrast — you tell me if this is fair — Oracle says, here's the stack, take it or leave it. Oh, and by the way, we can support that other stuff, but nobody does that, at least not that I can find. You don't lead that way anymore with the integrated stack, at least from what I can tell. You lead with business outcomes; that's how you guys converse.

Well, it really starts with listening to the client: what kind of data problem, or cloud problem, or whatever problem are they trying to solve? In many cases — at least from where I sit as a hardware seller — they've already made the decision on the software they're going to run. So Florida Blue, for example: they'd made their decision on Mongo. Coming in and trying to convince them not to do Mongo is not the best play for me. It's coming in and saying, listen, if you want to run Mongo at twice the performance at a better price point than Intel, we're the team to work with. So that's how we approach this marketplace: listen to the client, see what stack they've already picked — coming from a line of business or however they chose — and then provide the best infrastructure to run it on. Find the sweet-spot workloads and make them run better, faster. And to simplify this for clients who ask me where they should start with Linux on Power, I'll tell them: HANA is a great play — we have tremendous value around HANA and we're ramping clients there — plus big data and analytics, machine learning, deep learning, and high-performance computing, now that we've re-entered that space with a different design point. That's a big chunk of the market right there. And we've got huge differentiators versus the commodity play of an Intel server.

Excellent. All right, Doug, we've got to go. Thanks very much for stopping by.
Always a pleasure. Always great to see you guys.

Thanks for your time. All right, keep it right there, everybody. We'll be back with our next guest right after this.