Live from the Frederick P. Rose Hall, home of jazz at Lincoln Center in New York, New York, it's theCUBE at IBM Z Next, redefining digital business. Brought to you by headline sponsor IBM. We are here live in New York City for the IBM z Systems special presentation of theCUBE. I'm John Furrier, co-founder of SiliconANGLE, with my co-host Dave Vellante, co-founder of Wikibon.org. Dave, we are here with Kathryn Guarini, Vice President of z Systems Technology. Welcome to theCUBE, great to have you. Thank you, I'm really glad to be here on this exciting day for us. We had a great conversation last night. I wanted to just get you introduced to the crowd. One, you're overseeing a lot of the technology side of it, you're involved in the announcement, but you're super technical, and the speeds and feeds of this thing are out there. It's in the news, it's in the press, but it's not really getting the justice it deserves. And we were talking earlier in our intro about how the mainframe is back and modernized, but it's not your grandfather's mainframe. Tell us what's different, what's the performance, the tech involved, why is it different, and what should people be aware of? Sure, so this machine really is unmatched. We have tremendous scale and performance in multiple dimensions that we can talk through. The I/O subsystem provides tremendous value, security that's unmatched. So many of the features and attributes of the system just cannot be compared to other platforms. And the z13, what we're announcing today, evolves and improves so many of those attributes. We've really designed the system to support transaction growth from mobility, to do analytics in the system, integrated with the data and the transactions, so that we can drive insights when they really matter, and to support IT cloud delivery. So there are two threads out there in the news that we want to pivot on. One is the digital business model. And that's out in the press release. It's all the IBM marketing in action.
Oh, digital business, we believe it's transformative. That's pretty much something that's going to be transformative. But performance with the cloud has been touted. Hey, basically unlimited performance with cloud. Think of compute as not a scarce resource anymore. How do you guys see that? Because you guys are now pushing performance to a whole other level. Why can't I just get scale-out, say, or scale-out infrastructure, build data centers? How does this fit with that mindset, or do they go together? So there's performance in so many different dimensions, and I can talk you through a few of them. At the heart of the technology in the system, we have tremendous value from the processor up. So starting at the base technology, we build the microprocessor in 22 nanometer technology, eight cores per chip. We've got four layers of cache integrated in this, more cache that can be accessed from these processor cores than anything else you can compare to. Tremendous value: you don't have to go out through I/O to memory as frequently as you would have to in other environments. We also have an I/O subsystem that has hundreds of additional processing cores that allows you to drive workload fast through it. So I think it's the scale of this system that can allow you to do things in a single footprint that you would have to do with a variety of distributed environments separately. Coupled with unique security features: embedded encryption capability on the processor, PCI-attached tamper-resistant cryptography, compression engines, so many of these technologies that come together to build the system. So IBM went back to the woodshed and took all the good technology from the back room and cobbled it together, because you guys have done some pretty amazing things in what they call the proprietary days. It's been the mainframe back in the '60s, '70s, '80s, and client-server, a lot of innovation. So you guys, is that true? Would that be an accurate statement?
You guys cobbled together and engineered this system with the best? Engineered from soup to nuts, from the casters up, we literally have made innovations at almost every level in the system. Now, it's evolved from previous generations, and we had tremendous capabilities in the prior ones as well, but you see across almost every dimension we have improved performance, scale, and capability. And we've done that while opening up the platform. So some of the new capabilities that we're discussing today include enterprise Linux. So Linux on the platform: you can run Linux on many platforms, and Linux is Linux, but it's even better on the z13 because now you have the scalability, the security, the availability behind it, and new open support. We're announcing that KVM will be supported on this platform later this year. We have OpenStack supported. We're developing an ecosystem around this. We're announcing PostgreSQL, Docker, and Node.js support on the mainframe, and that's tremendously exciting because now we're really broadening the user base and allowing users to do a lot more with Linux on the mainframe. So one of the big themes that we're hearing today is marrying analytics and transaction systems together. You guys are very excited about that. Even the New York Times article referenced this. People are somewhat confused about it because other people talk about doing it. We go to Hadoop World, we talk Big Data, Spark, in-memory databases, SAP doing their stuff with HANA. What's different about what z Systems is doing? That's a great question. So today many users are moving data off of platforms, including the mainframe, to do their analytics and then moving it back on. This ETL process, extract, transform, load, is incredibly expensive and cumbersome: multiple copies of the data, redundancy, security risk, tremendous complexity to manage.
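The ETL round-trip described here, versus running the analytic where the data lives, can be sketched in a few lines. This is an illustrative toy, not IBM code: sqlite3 stands in for any transactional data store, and the table and column names are invented.

```python
import sqlite3

# A toy transactional store with a handful of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(i, 100.0 + i) for i in range(5)],
)

# ETL style: extract a full copy of the data, then analyze it elsewhere.
# Every copy adds latency, storage cost, and one more dataset to secure.
extracted = conn.execute("SELECT id, amount FROM transactions").fetchall()
etl_total = sum(amount for _, amount in extracted)

# In-place style: push the analytic down to where the data lives,
# so no second copy of the data ever exists and the result is never stale.
(in_place_total,) = conn.execute("SELECT SUM(amount) FROM transactions").fetchone()

assert etl_total == in_place_total  # same answer, one copy fewer
```

The point of the sketch is the data movement, not the arithmetic: in the ETL branch the whole table crosses a boundary before any analysis happens, which is exactly the cost the in-place model avoids.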
And it's totally unnecessary today, because you can do that analytics now on the System z platform, driving tremendous capability and insights that can be delivered within the transaction and integrated where the transactions and the data live. So much more value to do it that way. And we've built up a portfolio of capabilities, some of which we're announcing as part of today's event as well, that allow us to do transformation of the data and analytics on that data. And it's at every level, right? We have embedded analytics accelerators in the processor, a new engine we call SIMD, Single Instruction, Multiple Data, that allows you to do mathematical vector processing. Let's drill down on that. I want to take a quick take on this. The in-processor stuff is compelling to me. I want to drill down on that technically. Right now all the rage is in-memory. Even in big data, Spark has traction for the analytics. The ETL thing is a huge problem. I think that's 100% accurate across the board. We hear that all the time. But what's going on in the processor? Because you guys have advanced beyond just in-memory, it's in-processor. What is that architecture? What are some of the tech features, and why is that different than just saying, hey, I'm doing a lot of in-memory? So the processor has a deeper, richer cache hierarchy than we see in other environments. That means we have four layers of cache. Two of those cache layers are embedded within the processor core itself; they're private to the core. The next layer is on the processor chip, and it's shared amongst all those cores. And the fourth layer of cache is on a separate chip. It's huge. It's embedded DRAM technology, a tremendously large cache. And we've expanded that, which means you don't have to go out to memory nearly as frequently, because you have that data. Which is state-of-the-art. That's state-of-the-art today. In-memory is state-of-the-art today. You guys have taken it and advanced it inside the core.
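The SIMD idea mentioned above, one instruction operating on several data elements at once, can be modeled conceptually. This pure-Python sketch only illustrates the concept; real SIMD happens in hardware (on z13, the vector facility), and the lane width and function names here are invented for the illustration.

```python
# Conceptual model of SIMD (Single Instruction, Multiple Data):
# one operation is applied to a whole group of elements at once.

LANES = 4  # hypothetical vector width: 4 elements per "instruction"

def scalar_add(a, b):
    """Scalar model: one element per instruction, len(a) operations issued."""
    return [x + y for x, y in zip(a, b)]

def simd_add(a, b):
    """Vector model: LANES elements per instruction, len(a)/LANES operations."""
    out = []
    for i in range(0, len(a), LANES):
        # a single vector instruction would compute this whole chunk at once
        out.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [10.0] * 8
assert scalar_add(a, b) == simd_add(a, b)  # same result, 8 vs 2 "instructions"
```

The results are identical; the win is that the vector version issues a quarter as many operations, which is why analytics kernels like scoring and scans benefit from it.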
What kind of performance does that give you? What's the advantage? There are huge performance advantages to that. We see we can do analytics something like 17 times faster than comparable solutions. Being able to bring those analytics into the system for insights when you need them. To be able to do faster scoring of transactions. To be able to do faster fraud detection. So many applications, so many industries are looking to bring these insights faster, more co-located with the data, and not have to wait out the latency associated with moving data off and doing some sort of analysis on data that's stale. That's not interesting. We really want to be able to integrate that where the data and the transactions live, and we can now do that on the mainframe. So in-memory obviously is awesome, right? You can go much faster. The best I/O is no I/O, as Gene Amdahl would say. But if something goes wrong and you have to flush the memory and then reload everything, it's problematic. How does IBM address that? How do you minimize that problem? You hear complaints in other architectures that that's problematic. How do you solve that problem? Or have you solved that problem? Well, I think it's a combination of the cache, the memory, and the analytics capabilities, the resiliency of the system. So you worry about machines going down, failures. And we've built in security, reliability, and redundancy at every level to prevent failures. We have diagnostic capabilities, things like the IBM zAware solution, right? This is a solution that's used to monitor the system behavior so that you can identify anomalous behaviors before you have a problem. That's been available with z/OS. Now we're extending that to Linux for the first time. We have disaster recovery and continuous availability solutions like GDPS, which is now extended to be a virtual appliance for Linux.
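The zAware approach described here, watching system behavior and flagging deviations before they become outages, can be illustrated with a toy detector. This z-score check is emphatically not IBM's algorithm (zAware models message streams with far more sophistication); the function, threshold, and sample data are all invented for the sketch.

```python
import statistics

def anomalies(counts, threshold=2.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the series. A stand-in for the idea of
    learning 'normal' behavior and surfacing departures from it."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat series: nothing is anomalous
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Hourly message counts from a hypothetical log stream; hour 5 spikes.
msg_counts = [100, 102, 98, 101, 99, 500, 100, 97]
print(anomalies(msg_counts))  # → [5]
```

An operator alerted at hour 5 can investigate while the system is still up, which is the "before you have a problem" part of the pitch.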
So there are so many features and functions in this system that allow you to have a much more robust, capable solution. How popular is Linux? You guys talk a lot about Linux; can you quantify that? Linux has been around for 15 years on the mainframe, and we have very good user adoption. We're seeing a large fraction of our clients running Linux, either all by itself or in concert with z/OS. So double-digit workloads? Yeah, it's a very significant fraction of the MIPS in the field today. Kathryn, I want to get a personal perspective from you on some things. One, you have an applied physics degree from Yale and a master's and PhD in applied physics from Stanford. All the good credentials, by the way; you're super smart. I guess if you can get into those schools, it means you're smart. But the rage is software-defined, right? So I want you to tell us, from your perspective as an applied physicist, the advances in silicon are really being engineered now. So is it the combination of that and software-defined? What's your perspective? What should people know about the physics side of the tech? Because you can't change physics. At the end of the day, the silicon is doing some good stuff. So talk about that convergence between the physics, the silicon, and software-defined. That's a great question. So I think what sets us apart here with the mainframe is our ability to integrate across that stack. So you're right, silicon is silicon; with a piece of 22 nanometer silicon, we can all do similar things. But when you co-optimize what you do with that silicon with high-performance system design, with innovations at every level, firmware, operating systems, software, you can build an end-to-end solution that's unmatched. And within IBM, we do that. We really have an opportunity to collaborate across this stack. Can we put things in the operating system that can take advantage of something that's in the hardware?
And being able to do that really gives us a unique opportunity. And we've done that here, right? Whether it's the SIMD accelerator and having our software capabilities, our optimizers, and Java be able to take advantage of what's in that microprocessor. We see that with new instructions that we offer here that can be taken advantage of by compilers that optimize for what's in the technology. So I think it's that co-optimization across the stack. You're right, as a user, you see the software, you see the solution, you see the capability of the machine. But to get that, you need the infrastructure underneath it. You need the capabilities that can be exploited by the software. And that's why that- And we're seeing that in DevOps right now. With the DevOps movement, you're seeing: I want to abstract away the complexities of infrastructure and have software be more optimized. Here, you guys are changing the state of the art in memory and in processor architecture, but you're also enabling developers and software to work effectively. Right, and I think about cloud service delivery, right? We would love to be able to offer end users IT as a service, so you can access the mainframe, all those qualities of service that we know and love about the mainframe, without the complexity, and you can do that. Technologies like z/OS Connect and Bluemix with System z, the MobileFirst Platform, allowing you to connect from systems of engagement to systems of record to deploy z services. We're trying to help our clients not be cost centers for their firms, but to provide value-added services. And that can be done with the capabilities on the mainframe. So Node, Docker, OpenStack, KVM, and obviously we talked about Linux. What does that mean from a business standpoint, from the perspective of running applications? Can you sort of walk us through what you expect clients to do, or what clients are doing?
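z/OS Connect's role in the systems-of-engagement story is to expose mainframe transactions as REST/JSON services that mobile and cloud apps can call. A minimal sketch of what such a call might look like from a client follows; the host, service path, and payload fields are hypothetical, since the real request shape depends entirely on how a given service was defined.

```python
import json
import urllib.request

def build_balance_request(host, account_id):
    """Build (but do not send) a REST call to a hypothetical mainframe
    service exposed over z/OS Connect. Path and payload are invented."""
    body = json.dumps({"accountId": account_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"https://{host}/zosConnect/services/getBalance",  # hypothetical
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_balance_request("mainframe.example.com", "12345")
# urllib.request.urlopen(req) would invoke the service; from the caller's
# side it is plain HTTPS + JSON, with no mainframe-specific client code.
```

That last point is the business argument: the system of engagement never knows it is talking to CICS or IMS behind the gateway.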
It's all about standardization and really expanding an ecosystem for users on the platform. We want anybody running Linux anywhere to be able to run it, run their applications, develop their applications on the mainframe, and to be able to take advantage of the consolidation opportunities driven by the scale of the platform. To be able to drive unmatched end-to-end security solutions on this platform. It's a combination of enabling an ecosystem to do what users expect to be able to do. And that ecosystem continues to evolve. It's very rapidly changing. We know we have to respond, but we want to make sure that we are providing the capabilities that developers and users expect on the platform. And I think we've taken a tremendous leap with the z13 to be able to do that. So obviously Linux opened it up; that was the starting point. Right. What do you expect with these new open innovations? Will you pull in more workloads, more applications? I certainly believe we will. And new workloads on the platform, this is an evolution for us. We continue to see the opportunity to bring new workloads to the platform. I think support of Linux and the expanding ecosystem there helps us to do that effectively. We see that whether it's the transaction growth from mobile and being able to say, what does that mean for the mainframe? How can we not just respond to that, but take advantage and enable new opportunities there? And so I think absolutely Linux will help us to grow workloads, to get into new spaces, and really continue to modernize the mainframe. John and I were talking at the open: Paul Maritz, at the time CEO of VMware, in 2009 said, we are going to build a software mainframe. Interesting, very bold statement. Do you have a software mainframe? Have you already built it? I don't think you can have software without running on something. And so the mainframe is not a piece of hardware.
The mainframe is a solution, it's a platform that includes technology, infrastructure, hardware, and the software capabilities that run on it. And as I said, I think it's the integration, the co-optimization across all of that, that really provides value to clients. I don't know how you can have a software solution without some fundamental infrastructure that gives you the qualities of service, so much of the inherent security and availability; all of that is- That was marketing; it didn't pan out. The vision was beautiful, they put a great PowerPoint together. Honestly, he went to Pivotal, but I think what's happening is what you're talking about: it's distributed mainframe capability. The scale-out open source movement has driven the wannabe mainframe market to explode. And so now, you look at Amazon, you look at Google, you look at these power data centers: they are mainframes in essence. They are centralized, physical places. Yeah, so why didn't it pan out? I mean, it's moving in a direction. Well, VMware wanted to say the cloud is a software mainframe. Software runs on these data centers, so instead of racking and stacking x86 processors, you just drop in a mainframe, or God box as I call it, and you have this monster box that's highly optimized, and then you can have clusters of other stuff around it. But your argument is that the integration is what makes the difference, that end-to-end. So Amazon makes their own gear, right? We know that now; they don't do Open Compute, they're making their own gear. So people who want to be Amazon would probably go to something hybrid, mainframe-like. Well, they're not making their own. How do you make sense of that? Because Amazon, I mean, they purpose-built their own boxes; they are building their own mainframe. To a point, though, right? I mean, to the outside of the box. Right. The way I see it is, for mission-critical applications, you cannot support any downtime.
You want to have a system that's built from the ground up for pure availability, for security. And we have that, right? We have a system where you can prevent failures. We have redundancy at so many levels. It's a different model. Right, when you take money out of your account, or more importantly when you transfer money into your account, you need to make sure it's there, right? You want to know that with 100% confidence. And to do that, I would expect you'd feel more confident running it here. And credit card transactions. It's not just banking. It's the same game all over again, mission-critical versus non-mission-critical. I mean, Internet of Things. Well, what's not mission-critical? That's my follow-up question here. I agree with you. I agree with you. Internet of Things, some sensor data that's passive. If it's running my airplane, that's mission-critical. Is Nest running your temperature and you're down for 10 minutes? I mean, yeah, there are some times when we would accept some downtime. No, it's really about lumpy SLA performance. Amazon gets away with that because the economics are fantastic, right? But you can't be lumpy in bank transactions. So, Kathryn, what about cost? Everybody says, oh, the mainframe's so expensive, so expensive, and you guys put out some TCO data that suggests it's less expensive. Help us squint through that. Yeah, so I think when we look at total cost of ownership, we're often looking at the savings in administration and the management of the complexity of sprawl. And with the mainframe, because you have such scale in what you can include in a single footprint, you can now consolidate so much into this literally very small environment. And the cost savings, because of the integration capabilities, because of the performance that you can contain within this box, you see end-to-end cost savings for our clients. And that break-even point is not so large, right? And so you talked about mission-critical.
You're doing your mission-critical work on your mainframe, and you have other things that you need to do that you don't consider, perhaps, as mission-critical. You have an opportunity to consolidate. You can do that all on the same platform. We can run with tremendous utilization. You want to use these machines for all they're worth. So to follow up on that: the stickiness then, a.k.a. lock-in, used to be, I've got a bunch of COBOL code that won't run anywhere else, you've got me, I've got to keep buying mainframes. You're saying now the stickiness is that, for the types of workloads your clients are running, it is cheaper. That's your premise. It's cheaper, and I think it has unmatched capability, availability, and security features that you can't find in other solutions. In theory, you could replicate it, but it would just be so expensive with people. In theory, okay. But I think the fundamental technologies and solutions across that stack, who else can do that, right? Who can integrate solutions in the hardware and all the way up that stack? I don't know anyone else who can. Tell me, in your opinion, what gets you most excited about this technology platform? I mean, is there a couple of things? Is there one thing where you say, this is so game-changing, I'm super excited by this, I can't sleep at night, I'm intoxicated, technically? I mean, what gets you jazzed up about this? Well, I'll tell you, today's a really proud day. I have to say, being here and being a part of this launch, personally having been a part of the development, having been at IBM for 15 years, I spent the last eight years doing hardware development, including building components and key parts of the system. And now to see us bring that to market, with the value that I know we're bringing to clients, I get a little choked up. I truly, honestly feel really, really proud of what we've done.
So in terms of what is most exciting, I think the analytics story is incredibly powerful, and I think being able to take a bunch of the technologies that we've built up over time, including some of the new capabilities, like in-database transformation and advanced analytics that we'll continue to roll out over the course of this year, I think this can be really transformative, and I think we can help our clients take advantage of that. I think they will see tremendous value to their business. We'll be able to do things that we simply couldn't do with that old model of moving data off and having the latency that comes with that. So I'm really excited about that opportunity. So this is a modernized platform, not just a repackaging of the mainframe. Absolutely. Okay, great. So the second and final question I want to ask you is about two perspectives on the environment, the society we live in. So first let's talk IT: CIO, CEO. What mindset should they be in as this new transformation, the digital business, is upon them, and they have the ability to re-architect now with mainframe and cloud and data centers? What should they be thinking about? As someone who has a PhD in applied physics and has been working on this killer system, what's the moonshot for that CIO, and how should they be thinking about their architecture right now? So I think CIOs need to be thinking about what is a good solution for the variety of problems that they have in their shops, and not segment those as we've often seen. You have the x86 distributed world, and maybe you have a mainframe doing this and that. I'm saying to think about this more holistically: what is the set of challenges you need to go address as a business, and what capabilities do you want to bring to bear to solve those problems? I think that when you think about it that way, you get away from good-enough solutions, you get away from some of this mindset that this only plays over there and that only plays over there.
And I think you open yourself up to new possibilities that can drive tremendous value to the business, and we can think differently about how to use technology to drive efficiency, drive performance, and deliver real value. We were talking last night at dinner; we all have families and kids. And there's a lot of talk about software driving the world these days. And it is, software's amazing. It's the best time to be a software developer; I've been programming since I was in college, and it's so awesome with open source. However, there's a real hacker culture now with hardware. So what's your advice to young people out there? Middle schoolers, or parents that have kids in middle school, for young girls and young boys. Now you've got drones, you've got hackers, Raspberry Pi, these kinds of things going on. You've got kind of this Homebrew Computer mindset with these young kids, and they don't even know what an Apple computer is. I would say it is so exciting. The engineering world, the technology challenges, hardware or software, and I wouldn't even differentiate. I think we have a tremendous opportunity to do new and exciting things here. I would say to young girls and boys: don't opt out too soon. That means take your classes, study math and science in school, and keep it as an option, because you might find when you're in high school or college or beyond that you really want to do this cool stuff. And if you haven't taken the basics, you'll find yourself not in a position to be able to team up and build great things and deliver new products and provide a lot of value. I think it's a really exciting area, and I encourage people. It's a resurgence; I'm seeing it. I mean, I went to the 30th anniversary of Apple's Macintosh in Cupertino last year, and that whole Homebrew Computer Club was a hacker culture, you know, the misfits, if you will.
I think there are people who grow up and always know that they want to be the engineer, the software developer, and that's great. And then there are others of us, and I'll put myself in that space, who may have a lot of different interests. What has drawn me to engineering and to the work that we do here has been the ability to solve tough problems, to do something no one has ever done before, to team with fantastically smart people, and to build new technology. I think it's an incredibly exciting space, and I encourage people to think about that opportunity. From a person who has a PhD in applied physics, that's awesome. Thanks for joining us here inside theCUBE, VP of z Systems. Again, a great time to be a software developer, a great time to be making hardware and solutions. This is theCUBE; we're excited to be live in New York City. I'm John Furrier with Dave Vellante. We'll be right back after this short break.