Live from Las Vegas, Nevada, it's theCUBE at IBM Interconnect 2015. Brought to you by headline sponsor, IBM. Okay, we are back live inside theCUBE. This is SiliconANGLE's flagship program. We go out to the events and extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, joined by my co-host, Dave Vellante, co-founder of Wikibon research. Our next guest is Jeff Frey, CTO of the System z platform at IBM. Welcome to theCUBE. Thank you. So System z, the big event we covered in New York City at Jazz at Lincoln Center, which was phenomenal. It's the mainframe. Huge. I call it the God Box, but that's a California term, I guess. It's powerful, it's a mainframe system, but no one calls high-performance computing "mainframes" anymore, they're just big machines that kick ass, basically. So what's under the hood? What is the mainframe's evolution into this modern world? People still have mainframes out there, big banks and whatever, but right now it's about speed. I need in-memory, I've got all this stuff going on, large data, a tsunami of data. What's the innovation, what's under the hood? Well, I think a traditional value of the mainframe is that the more work you put on it, the better it performs. It's a large, shared pool of resources with the best, fastest single-thread processor speed in the industry, and lots of memory. We focus on a balanced system design, so processor, memory, bandwidth, internal buses, I/O, it all comes together to provide ultimate scale and performance, and with the z13, we've taken another turn on that crank, with the absolute best single-thread performance in the industry. I think what's interesting about this mainframe is that we really focused on compute performance, the traditional performance. Give an example of the order of magnitude. Okay, I get the single-threaded thing, that's great, but performance, what does that mean?
Well, we have a 5.0 gigahertz single-thread core, which is probably almost double what most distributed platforms have in terms of single-thread performance, and while a lot of competitive server platforms have traded off single-thread performance for more cores or more scale-out parallelism, we haven't given up on that. Everybody's hitting the laws of physics, right, in terms of the rate at which you can improve performance in these single threads, but we're pulling every trick in the book to ensure that we deliver industry-leading performance. So, you focused this time around on compute performance as opposed to I/O? Well, I think there's a well-earned reputation for z being a high-volume transaction processing engine, serving data and traditional workloads, but there had been a view that if you want to do numerically intensive or compute-intensive or statistically, mathematically intensive workloads, you run those somewhere else. And with the advent of our objective to get more predictive and prescriptive analytics onto the platform, with this machine in particular, we focused on the mathematical performance of the engines. So we introduced vector processing, single instruction, multiple data, lots of memory, multiple threads per core, and that all contributes to compute performance. I see what you're saying. So, the mainframe you don't traditionally think of as your high-performance computing system; should we start thinking of it as high-performance? You know, I think there are extremes in terms of high-performance computing, but it's fair to recognize that commercial, even traditional transaction processing is becoming more compute-intensive. The language runtimes are making more use of compute performance rather than just moving data around.
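The SIMD point, one instruction operating on several data elements at once, can be sketched in plain Python. This is an illustrative model, not IBM code: the 4-wide lane width and the "instruction" counts are assumptions made for the example.

```python
import math

def scalar_add(a, b):
    """Scalar model: one add 'instruction' per element, n ops for n elements."""
    return [x + y for x, y in zip(a, b)], len(a)

def simd_add(a, b, width=4):
    """SIMD model: one vector 'instruction' per lane group, ceil(n/width) ops."""
    out = []
    for i in range(0, len(a), width):
        # One "instruction" adds up to `width` element pairs at once.
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
    return out, math.ceil(len(a) / width)

a, b = list(range(8)), list(range(100, 108))
scalar_result, scalar_ops = scalar_add(a, b)
simd_result, simd_ops = simd_add(a, b)
assert scalar_result == simd_result  # same answer either way
print(scalar_ops, simd_ops)          # 8 scalar ops vs 2 vector ops
```

The results are identical; what changes is the number of instructions issued, which is where the vector facility earns its throughput.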
And I think it's really part of the core strategy here to use tools like IBM's SPSS modeling tools, which generate these statistical models, and those things are mathematically and computationally intense, so we've got to make sure we run those best of breed on the platform. You've got a long history at IBM, we were talking off camera. When you saw the virtualization trend occur. Yeah. What did that feel like to you? I mean the virtualization trend, VMware specifically. Yeah, I thought you might have been talking about 1967, when we invented it. Right, right, yeah. That's what I mean. Yeah. So did you just look at that and shake your head, or say, boy, that was a missed opportunity, or what was going through your mind at that time? Well, you know, a little bit. You had mentioned, we were talking off camera, z tends to be kind of the workhorse that just works in the back room, and it doesn't break, so it doesn't get a lot of attention, right? And I think there are some people who just aren't completely up to speed on the technologies that have been in this platform for quite some time. Now, the guys at VMware have done a fantastic job of taking the x86 environment and virtualizing it. But we still have, in my opinion, the best-of-breed server virtualization. The benefit we have is that it's in the hardware, right? That level of integration is how we get near-native performance out of our engines, even when they're highly virtualized. I think the thing with virtualization that we need to be focusing on now is not just whether the processors are effectively utilized, or how much work you can consolidate on the platform, because that's a strength of z. We can take 10 or 20 cores' worth of x86 and run it on a single engine on z, right?
And that's been a strength of our virtualization. And I think now, with cloud and with all this dynamic provisioning and software-defined environments, the key to virtualization is in the management, right? So having a really good, effective, standardized way to do virtualization management is really the key. And so when you talk about relative to x86, you've got an advantage because it's in the hardware and firmware. That's a performance and utilization factor. Your virtualization tax is not as great. It's really minimal. It's de minimis. Yeah, like I said, we run at near-native speed. Yeah, and VMware, you're right, has done a very good job, but I wanted to ask your opinion on something. So in 2009, I listened to Paul Maritz. He was at an investor and financial analyst meeting, and he said, we are building a software mainframe. And I was taking notes and I went, that's interesting. Now they don't use that term anymore. There are marketing people at VMware who said to stop saying software mainframe; we don't want to market it that way. But what it was... Well, what he meant was it's all software now. That's classic Maritz, and he understands technology. So that really caught my attention. I said, wow, okay. Never goes down, it's like a global sysplex, I thought about that. Runs any workload, any app, highly virtualized. Has VMware, in your opinion, built a software mainframe? No. Tell me why. Well, I think part of what makes a mainframe a mainframe is the economies of scale. As I said earlier, the thing actually runs better when you put more work on it. So when other competitive systems can drive utilizations to 80, 90, even 100 percent with consistent response times across all of the workloads, then maybe they'll have some of the mainframe's characteristics. I think what he was referring to, which I happen to agree with, is this notion of software-defined environments.
I mean, I think the point I made earlier: virtualization isn't just about having fast or well-utilized hardware anymore. It's about using virtual resources as the foundation for your whole operational environment. When you virtualize a system, it's dynamic. It can be programmed. You can grow it. You can shrink it. You can dynamically instantiate it. You can make it smarter. And these are the things. That's the holy grail. Yeah, this is what software brings. So if we can make hardware look more like software, server, storage, and network, holy grail. Holy grail, holy grail. So I want to get your perspective on this. We've talked about this all the time on theCUBE. The software mainframe, Maritz had a great vision. 2010, what happened next? It stalled. It was a Picasso that he painted, the Mona Lisa of tech. Okay, beautiful. The problem was the market wasn't ready for it. All the stuff going on in the stack, then they go to Pivotal. So they bring everything over to Pivotal, decouple it, nice little concept. But they're still working on the virtualization piece at the network layer. So there's still work to be done. And storage too, by the way. Yeah. Right? I think the moonshot is the software mainframe, and it's just not there yet, the industry. Yeah. Yeah, so you guys have it. But Jeff, you're saying utilization is not the battleground anymore, certainly for you. But when you look at the real utilization of a VMware environment. Yeah. They talk 20, 30, maybe 40 percent. If they talk way over 50, no way, not even close. David Floyer, our David Floyer, knows this stuff. Certainly sub-30 percent. Yeah. Right? So for the balance of the industry, utilization is still a battleground. Yeah, yeah, yeah. I don't mean to suggest that's not important. What I'm suggesting is. For you. What's next? And what's next is to make use of virtualization as the foundation for all operational control of the environment. And aren't you doing that today? Yeah, we do a lot of that today.
But I think on the mainframe we have focused on the traditional core value proposition of our virtualization, which is utilization and performance and scale. Right? You'll see us do even more than we do today in the way of optimizing underlying resources based on the needs of the workload. That's another strength of ours: we can assign service levels to the work, and then have the operating system and the underlying hardware and virtualization layers optimize to the business importance of the work. So what's your take on Docker? Docker has this model. And we just heard Nancy Pierce talk about the cloud, the new division, all the IBM resources coming together, and then it's probably going to just run like the wind. Now, you bring a different perspective in technology with the mainframe, and you've been working under the hood. Share with the audience the modern stuff you've done that makes it state of the art. This machine runs workloads with all that stuff. What is it? There's Linux involved, there's Java. Tease out some of the highlights, the new shiny parts of the engine that you've added onto this. Yeah, well, with this particular announcement, for the first time we've focused on core engine performance, again delivering 40 percent more capacity, lots more cores, 10 terabytes of memory on this machine. So I mean, we know... That's a speeds-and-feeds game, but let's get down and dirty on this. Who does that help? Is it a big bank? Minutes matter, right, for some of these guys? Yeah, yeah, yeah. It starts with people wanting to put more and more data in memory. If you give a terabyte of memory to DB2 running on this platform and have it use that for more in-memory data, you reduce I/Os and you improve response time by 70 percent for those transactions. And those are all core transactional environments that are running, right? Those financial institutions, those banks, those insurance companies.
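The in-memory claim follows the usual caching math. A toy sketch, not DB2's actual buffer manager: a larger LRU buffer pool absorbs the hot working set, and every hit avoids a simulated disk I/O. The page counts and pool sizes below are invented for illustration.

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU page cache standing in for a database buffer pool."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> data, in LRU order
        self.disk_reads = 0          # simulated I/Os to backing disk

    def read(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # hit: refresh LRU position
            return self.pages[page_id]
        self.disk_reads += 1                  # miss: one simulated disk I/O
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)    # evict least recently used page
        self.pages[page_id] = f"page-{page_id}"
        return self.pages[page_id]

# Hot set of 100 pages, scanned 100 times over:
workload = [i % 100 for i in range(10_000)]
small, large = BufferPool(10), BufferPool(100)
for p in workload:
    small.read(p)
    large.read(p)
print(small.disk_reads, large.disk_reads)  # 10000 vs 100
```

The small pool thrashes (every access misses, 10,000 disk reads), while the pool large enough to hold the working set does 100 reads total, once per page. More memory for the buffer pool translates directly into fewer I/Os, which is the mechanism behind the response-time improvement described above.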
Those are hardcore users you have on that, right? This is big. And what's the alternative? Say the z wasn't around; I'd have to go build out a distributed system, buy some boxes. Yeah, I mean, I think people generally believe that in the absence of a mainframe, the value proposition is all about scale-out, right? I heard some customers tell me off the record at the z event, I won't say their names, but let's just say they're huge, and they don't mind writing big, fat checks. The stakes are high. They said, we've already bought three, not even looking at it. Sight unseen. Sight unseen, we got the beta, we're just like... and I go, so you're writing a blank check? Is it R&D? It's cool, you've got good R&D? No, no, no, no, no. He said, minutes matter. Hundreds of millions of dollars matter in a minutes-wide window. Minutes window. Is there anywhere you can squeeze performance? Well, it's not just that, it's availability, right? I mean, the ability to make sure that if a platform is running core, critical business, it's always there, always on, all the time, right? It's not going to present resiliency or availability problems. That's key to a lot of our customers. Accessibility too is important. Accessibility. Yeah, but the z13 is not changing my application availability. No, well, improving it. I mean, we continue... yes. The technical core of this is, take our I/O subsystem, right? Every release, including this machine, we make the subsystems that drive I/O smarter, more autonomic, right? So we have hardware that can sense congestion in the fabric, congestion in the network, and can route around failures, all transparently, right? Multi-pathing that does transparent routing around congestion and failure with no impact to the application whatsoever, right? This is what we focus on. I see, okay, and you can certainly measure that as application availability. I want to talk more about I/O.
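The multipathing behavior described here can be caricatured in a few lines. This is a toy sketch, not the actual z channel subsystem; the path names and health flags are invented for illustration.

```python
def issue_io(paths, request):
    """Route an I/O request around failed or congested paths,
    transparently to the caller."""
    # Prefer a healthy, uncongested path.
    for path in paths:
        if path["healthy"] and not path["congested"]:
            return f"{request} via {path['name']}"
    # Fall back to any healthy path, even if congested.
    for path in paths:
        if path["healthy"]:
            return f"{request} via {path['name']}"
    raise IOError("no healthy paths to device")

# Hypothetical path states: one failed, one congested, one clean.
paths = [
    {"name": "path-10", "healthy": False, "congested": False},  # failed
    {"name": "path-11", "healthy": True,  "congested": True},   # congested
    {"name": "path-12", "healthy": True,  "congested": False},
]
print(issue_io(paths, "read-4k"))  # routes around the first two paths
```

The application just issues `read-4k`; which physical path carried it, and which paths were avoided, is invisible at the application layer, which is the "no impact to the application whatsoever" point.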
You just said you focused on compute performance, then you mentioned innovations in I/O; it goes to how the whole balanced system is the key. I remember when I was a young pup in this business, I lived inside the systems and technology guides. I would just pore through that stuff. And I remember when IBM announced MVS/XA with expanded storage. I said, wow, if IBM could persist that memory extension, it would totally be a game changer. We were a few decades ahead of our time, I guess, in terms of understanding what the technology could do. But now flash comes into play, and you're starting to see the pendulum swing back to persistence at near-memory speeds. So can you talk about flash and where it fits? Yeah, there are a couple of places we can talk about flash, right? The traditional thought on flash is you build flash into the storage subsystems, and of course the latency and the performance are extraordinary, right? And so we're even looking at the possibility of synchronous I/O, really exploiting the response times and the latency improvements that flash can give you in the storage devices themselves. But the other emphasis for z is internal flash. We actually have the ability to put 6.4 terabytes' worth of internal flash in the system, and we use that for internal purposes, right? So in a traditional z/OS system now, you can completely eliminate all of the paging, right? Virtual memory paging to external disk, by using internal flash. That gives a performance advantage, that gives an availability advantage, right? It's huge. So, given the slow nature of spinning disk, you mentioned paging. Prior to flash, how much emphasis did you actually put on paging algorithms? Again, given the latency, do you have to rethink paging now with that consistent resource?
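The paging argument is simple latency arithmetic. With assumed latencies (the figures below are generic ballpark numbers, not IBM measurements), even a tiny page-fault rate dominates average access time when the backing store is spinning disk, and internal flash shrinks that penalty by orders of magnitude:

```python
# Assumed, order-of-magnitude latencies in microseconds (illustrative only):
LATENCY_US = {
    "dram": 0.1,
    "internal_flash": 100.0,
    "spinning_disk": 8000.0,
}

def avg_access_us(page_fault_rate, backing_store):
    """Average memory access time: hits served from DRAM,
    faults served from the paging backing store."""
    return ((1 - page_fault_rate) * LATENCY_US["dram"]
            + page_fault_rate * LATENCY_US[backing_store])

# One fault per thousand accesses, against each backing store:
for store in ("spinning_disk", "internal_flash"):
    print(f"{store}: {avg_access_us(0.001, store):.4f} us average")
```

With disk behind the pager, the 0.1 percent of faulting accesses contribute almost the entire average; with flash, the average stays within a couple of DRAM latencies, which is why moving paging onto internal flash is framed as a memory-hierarchy extension rather than an I/O optimization.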
Like I said, the last-generation machine is where we introduced this, and one of the primary use cases was paging, right? And the performance you get out of that internal flash, and the availability, is extraordinary. But now what about when I'm running Linux? It's coming. Because I would think the Linux open-source community didn't even worry about paging for a long time. Well, I mean, there's paging of the guests, right? And then there's paging, memory overcommitment, and optimizations you can make in the virtual machines themselves, right? So that's our next step: to really exploit internal flash for the hypervisor itself. One of the big benefits to scaling up a virtual environment is effective memory management, working-set management, overcommitment of memory, and flash will help. So essentially bypassing spinning disk, as well as the storage protocol. That's correct, and it's all internal. And so it's essentially a memory extension. Yeah, that's the way we view it, as a natural high-performance extension in the memory hierarchy. And it's an atomic write into that memory extension? Yeah. And you're doing that today, or? Yeah, we're doing that today on the last generation of the machine, the zEC12, where we introduced flash for z/OS. It was a z/OS-only environment. And are apps being rewritten to take advantage of that? No. Or new apps? Yeah, you know, a lot of what we do is in the underlying middleware, operating system, and hardware. I think another benefit of the mainframe is that we introduce this technology and, wherever possible, exploit it transparently to the application. It's not that some of these capabilities aren't made available to the application, but the real leverage point is that all of a sudden your operating system, your DB2, your WebSphere, that stuff just runs faster and better with no change to the app.
We did that with networking. You mentioned high-performance computing before; InfiniBand brought us RDMA capabilities for networking, and we've introduced that now on standard converged Ethernet, completely transparent to the application. So now any applications driving JDBC access to DB2, for example, over standard sockets and IP, transparent to the app, are getting huge performance gains, because we slide this stuff underneath it and it's all transparent. All right, Jeff, we're getting the hook here; we've got the keynotes going on. Thanks for coming, appreciate it. Bottom line for the folks out there: what's the System z platform all about? The bottom line, a bumper sticker for what z is about these days. Yeah, I think the positioning here is, with Linux, with the advances in the hardware, it's focused on not only providing the traditional value proposition for the traditional workloads, but, with Linux, with standardization, with best-of-breed virtualization, with the openness and the standardization to plug into standard cloud environments, you take all of that value and economy of scale, and it's a great cloud platform. We're bringing analytics to the platform with the compute-intensive stuff, right? Very important for us. And the security is solid. We've got advances in our crypto stuff again in this machine. The crypto is a big deal, I think. That's a big deal right there. 2x performance over the last generation. In-memory analytics. Big memory, right? All right, there it is. It's the big iron, the God Box. It's all-knowing: the more work you give it, the harder it pushes, and the better it performs. Love that line. This is theCUBE, same thing here: the more interviews we do, the better we get. So we're bringing it out to you with the event coverage here, live at Interconnect. We'll be right back after this short break.