Announcer: Live from the Frederick P. Rose Hall, home of Jazz at Lincoln Center in New York, New York, it's theCUBE at IBM Z Next: redefining digital business. Brought to you by headline sponsor IBM.

Dave Vellante: Welcome back to Jazz at Lincoln Center in the Big Apple. My name is Dave Vellante. This is theCUBE. theCUBE goes out, we extract the signal from the noise. We're at the IBM Z Systems announcement. This is the 13th generation of Z Systems, IBM innovation dating back to the System/360, 50 years ago, almost 51 years ago now. Ray Jones is here. He's the vice president of Z Systems software. We're going to talk about mainframe economics, Z economics, right? Thanks very much for coming to theCUBE.

Ray Jones: Thank you.

Dave Vellante: So, you must be excited. Big day for you guys. All your big clients are here. You've got the analysts in. Got some terrific press this morning at the analyst event. You were talking about pricing, you're talking about economics. What's your angle on all this?

Ray Jones: Well, a couple of angles. First, as we integrate this platform more and more, we drive an accelerated rate and pace of price performance for strategic customer workloads, things like cloud, analytics, mobile. We underpin it with superior security, the most secure server on the planet. And then from a pricing perspective, we've worked very closely with customers over the last few months to deliver substantial flexibility and significant simplicity over previous structures.

Dave Vellante: So it's all about competitive price performance, better pricing, more flexible pricing, simpler pricing. Okay, so can you be more specific? What exactly has changed?

Ray Jones: Sure. From a pricing perspective, we have basically eliminated a lot of the rules that have grown up over time that define what is a sysplex, what is a new workload, and then how we measure and price those new workloads. It used to be that we would ask customers to confine new workloads in existing or new logical partitions. Now, we literally price by the address space.
Ray Jones: So if they want to bring in a new workload, they can run it anywhere they choose on the platform, they get aggressive pricing, and they don't have to make any changes to their existing infrastructure to get it. And they only pay for what they use, and no more.

Dave Vellante: So you're saying that in the old structure, you had to choose up front which LPARs the workload would be allocated to, and that was a fixed block of infrastructure that you were restricted to, and if you wanted to make a change, you would potentially be charged for it.

Ray Jones: Yes.

Dave Vellante: Okay, and to avoid that, I would have to do gymnastics.

Ray Jones: Exactly.

Dave Vellante: Yeah, people must have hated that.

Ray Jones: They hated it, and the good news is that they were very clear about what they wanted in a real, true, virtualized pricing system, and we've basically agreed and are announcing today everything that they told us they want.

Dave Vellante: Okay, so tell me again. You're talking about a virtual pool of resource that I can then allocate a workload to, and if that workload moves, my pricing doesn't change. So you're essentially pricing by workload?

Ray Jones: We're literally pricing by workload, and we're pricing for the true and only capacity that the customer uses. If that capacity changes month by month, the price will change and float with the workload. So if the workload gets bigger or smaller, it's a true floating mechanism, but we only charge for what they consume, and we no longer require specific software or hardware configurations in order to do it.

Dave Vellante: So you're talking about MIPS consumption?

Ray Jones: Yeah.

Dave Vellante: Okay, now, let's take an example. So Z Systems software, that excludes the database, correct?

Ray Jones: That's everything. It includes the database.

Dave Vellante: Okay, so did you used to price the database by core? I mean, that was a common practice, right?
Ray Jones: Well, we would price it by the capacity, by the MIPS, but we would limit the ability of the customer to deploy the software to either specific hardware configurations or specific logical partitions. So if you wanted to put a new database workload into a logical partition, you would pay for the entire partition. Whatever else was in there would also absorb an uplift for that new workload.

Dave Vellante: I see.

Ray Jones: So we've eliminated all of those rules.

Dave Vellante: So you'd never price by core for the database?

Ray Jones: No, we've always priced by MIPS, or the aggregation of MIPS that we call MSUs. But in fact, we would price by logical partition, right? Which could be greater than the MIPS consumed by the workload. So by being much more granular, we can now give the customer true line of sight to the application economics, and only the application. Any other impact is erased or subtracted, and only the price for that new workload or subsystem is what they see the charge for.

Dave Vellante: So it's elastic pricing, essentially. One would think about it as cloud pricing. Is that right?

Ray Jones: It's very elastic, right? And the key really is that in order to get the benefit of it, the customer does not have to make any changes to their infrastructure, and they only pay for what they use.

Dave Vellante: So what's going to happen to customers' bills as we look forward? I mean, you must have done the predictive analytics. How can they anticipate increases or decreases?

Ray Jones: Well, this pricing is a new design for the Z13, for new workloads. And going forward, as we've tested the pricing against other Intel or distributed core models, we're finding that it is extremely competitive. As an example, for mobile workloads with the Z13, the cost of doing mobile on Z, as opposed to doing it on other platforms, other servers, is 40% less.

Dave Vellante: Four-zero.

Ray Jones: Four-zero, 40% less. The cost of doing analytics on a mainframe, versus how people do it today, which tends to be off the platform, is now literally one-tenth of the cost of other platforms.
Dave Vellante: One-tenth.

Ray Jones: So the combination of the pricing, the flexibility, and the underlying price performance that we've built into the software, and the way that it exploits the Z13, is a powerful change in the pricing and the economics that makes this platform incredibly competitive.

Dave Vellante: So from a competitive standpoint, you're banking on the efficiency of the system and its design and its integrated nature.

Ray Jones: Exactly.

Dave Vellante: But let's compare it to, say I'm doing some kind of x86, I'm running an Oracle database; they're going to price by core, typically. So comparing a traditional Unix or Linux workload to what you're doing, tell me the numbers again. You said 40% less expensive. And that's because of the efficiency of your system and essentially the inefficiency of the competitor's pricing model. Is that a fair assessment?

Ray Jones: It's the efficiency and the flexibility of the hardware, the software, and the new pricing, as it relates to the way in which distributed systems run and are priced, right? We've made radical changes to the hardware, the software, and the pricing in order to dramatically lower the bar and become flat-out very competitive for strategic workloads like CAMS: cloud, analytics, mobile, and security workloads.

Dave Vellante: And that ripples through maintenance, obviously, right?

Ray Jones: It does.

Dave Vellante: The rule of thumb, right, is that 15 to 18% of the license cost is going to be maintenance cost. So if I lower my license cost this month... am I paying monthly?

Ray Jones: You can. In addition to paying monthly, you now have options to pay a one-time charge off of the same metric, with the same conditions that I've referenced.

Dave Vellante: So if I feel like I'm going to be a big consumer and it's going to be more expensive for me to go monthly, I could just say I want a one-time charge, an all-in, upfront, perpetual license.

Ray Jones: Yep. So that's another aspect of the flexibility that we're introducing into the structure.
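The pricing mechanics described in this exchange, a monthly charge that floats with the capacity a workload actually consumes rather than with the size of the LPAR it runs in, plus maintenance riding on the license cost, can be sketched as a toy model. All rates and MSU figures below are hypothetical illustrations for the sake of the arithmetic, not IBM's actual metrics or prices.

```python
# Hypothetical sketch of consumption-based vs. partition-based pricing.
# MSU_RATE and all MSU figures are invented for illustration only.

MSU_RATE = 120.0        # hypothetical dollars per MSU per month
MAINT_FRACTION = 0.165  # interview's rule of thumb: ~15-18% of license cost

def monthly_charge(workload_msus):
    """New model: charge only for the MSUs the workload itself consumes,
    wherever on the platform it runs."""
    return workload_msus * MSU_RATE

def old_lpar_charge(lpar_total_msus):
    """Old model: a new workload was priced against the whole logical
    partition's capacity, not just the workload's own usage."""
    return lpar_total_msus * MSU_RATE

# A workload consuming 50 MSUs inside a 200-MSU LPAR:
new_price = monthly_charge(50)    # priced on 50 MSUs consumed
old_price = old_lpar_charge(200)  # priced on the full 200-MSU partition

# The bill floats month to month with actual consumption:
usage_by_month = [50, 65, 40]
bills = [monthly_charge(m) for m in usage_by_month]

# Maintenance is a fraction of license cost, so a lower license
# charge lowers maintenance proportionally:
maintenance = new_price * MAINT_FRACTION
```

Under these invented numbers, the workload-level charge is a quarter of the old partition-level charge, and shrinking consumption in a given month shrinks that month's bill automatically, which is the "pay only for what you use" behavior described above.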
Ray Jones: You can pay only for what you use, as you want, how you want it.

Dave Vellante: Okay. So, thinking about the nature of the distributed computing model, which you've referenced a couple of times, I want to pull out of you: what's inefficient about that model? Is it the way they price by core? Is it just the nature of the inefficiency of the system? We're not talking TCO here, or are we?

Ray Jones: We are. Well, we're talking about TCO, total cost of ownership, but we're also talking about total cost of acquisition, which is hardware, software, and maintenance only.

Dave Vellante: So let's talk about the hardware, software, and maintenance-only piece. You are suggesting that your model is more efficient and more flexible than the distributed model. I want to double-click on that a little bit.

Ray Jones: Yes.

Dave Vellante: Why? You know, compare it to that distributed model. I mean, I get what you guys are doing. What's the gotcha in the distributed model?

Ray Jones: Well, first, the distributed model, from a technology perspective, is struggling to continue to introduce accelerated rates of price performance, and that's a function of the technology. We've been able to overcome that with System z by the way in which we integrate the hardware, the software, and the pricing. In addition, the distributed model prices by the core, and what we see is that workloads running on distributed systems tend to go through what we call core inflation. In order to physically get the throughput, the security, the backup, the recovery, the sheer number of cores that's required to emulate the kind of thing that we get on a mainframe becomes huge.

Dave Vellante: Core creep, we call it.

Ray Jones: Right, you got it. So, in effect, we've eliminated core creep in the way that we build the technology, and we've also eliminated capacity creep, the equivalent of core creep, in the way that we introduce flexibility into the structure for pricing.

Dave Vellante: Last question. What does this do to your revenue model? Aren't you concerned?
Dave Vellante: Or are you betting on the elasticity, that if you drop the price, demand will go up?

Ray Jones: We believe so, and customers are already responding, growing their workloads and putting new workloads on the platform that were never there. So, you're right. This is all about being flat-out competitive for new workloads, so that the growth outstrips the give, as it were.

Dave Vellante: Resurgence in mainframe: cloud, mobile, analytics, security. Ray, thanks very much for coming to theCUBE. Really appreciate your time.

Ray Jones: Okay.

Dave Vellante: All right. Keep right there, buddy. We'll be back with the general session. We'll be broadcasting that live, and then John and I will be back with more guests and to wrap up today. Keep right there. You're watching theCUBE. We're live from the Big Apple.