Live from the Frederick P. Rose Hall, home of Jazz at Lincoln Center in New York, New York, it's theCUBE at IBM z Next: Redefining Digital Business. Brought to you by headline sponsor, IBM.

theCUBE after dark at Jazz at Lincoln Center, at the IBM z event. IBM knows how to throw a party. I'm here with Jon Toigo, consultant extraordinaire. Jon, thanks for coming on theCUBE. What'd you think of the day today?

Actually, one of IBM's better events: well planned, well executed, obviously very entertaining, and good after-hours entertainment as well. I thought the content of the show was top notch, a lot of high-value content, with obvious little gaps here and there in the detail that maybe I was looking for, but for the most part they got their message through, they got their points out. The z13 looks like a winner.

Yeah, so one of the big messages was obviously bringing analytic workloads and transaction workloads together. How real is that in the clients that you work with?

I get the impression that it is certainly something all my clients are very interested in, and I think they may have actually solved that particular problem. I have basically three issues, but the one related to that that concerns me is the stated intention to combine Hadoop infrastructure and other distributed infrastructure with the mainframe, which now has these analytics capabilities built right in alongside the transactions. I have yet to hear how that's going to be accomplished to my satisfaction. I know they have every bus extension known to man; they could certainly connect to a clustered nodal configuration of Hadoop. The question is whether that really delivers the throughput you would expect, the delivery of the data over to the mainframe so that you can perform analytics in a timely way, or whether they're doing some pre-processing out on the Hadoop infrastructure and then sending the result over. I don't know, and I don't think it's been clearly defined or clearly explained. Somebody probably knows within IBM, but they didn't share it with us today.
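Jon's throughput question can be made concrete with a quick back-of-the-envelope calculation. In the sketch below, every figure (dataset size, link speed, line-rate efficiency, reduction ratio) is a hypothetical assumption rather than anything IBM disclosed; it only illustrates why "ship the raw data to the mainframe" and "pre-process on Hadoop and ship the result" can differ by orders of magnitude.

```python
# Back-of-envelope: moving Hadoop-resident data to the mainframe for analytics.
# Every figure below is a hypothetical assumption, not an IBM-published number.

def transfer_seconds(data_bytes: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Seconds to move data_bytes over a link_gbps link, assuming only a
    fraction (efficiency) of line rate survives protocol overhead and contention."""
    effective_bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return data_bytes / effective_bytes_per_sec

RAW_DATASET = 10e12    # assume 10 TB of data sitting in the Hadoop cluster
REDUCED_RESULT = 50e9  # assume pre-aggregation on Hadoop shrinks it to 50 GB
LINK_GBPS = 10         # assume a 10 GbE path between the cluster and the mainframe

print(f"Ship raw data:       {transfer_seconds(RAW_DATASET, LINK_GBPS) / 3600:.1f} hours")
print(f"Ship reduced result: {transfer_seconds(REDUCED_RESULT, LINK_GBPS) / 60:.1f} minutes")
```

At those assumed numbers, the raw transfer takes roughly three hours while the reduced result moves in about a minute, which is why the unanswered question of where the pre-processing happens matters so much for "timely" analytics.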
The devil's always in the details, as they say. So what was missing from your standpoint?

Well, the first thing, globally, is that I'm a storage geek, so I wanted to see more about the relationship between all the high-performance computing they've now built in, the faster processors, the new zIIP and zAAP processors, and some of the other configuration details they were noting and touting, and how they actually make that work with respect to the underlying data, the stored data. Also, they showed a whole diagram in one of the sessions today on data lifecycle management, which is a big bugaboo of mine. I think most of our infrastructure suffers from a lack of management of the underlying hardware and a lack of management of the data we're putting on that infrastructure. So when I looked at that chart, I was looking for the best, okay? I was looking for something that would blow the old EMC nonsense out of the water, from when they were trying to sell information lifecycle management about ten years ago. I didn't see it. What I saw were the basic steps involved in getting data analyzed and getting the results back to an application so it could push them out to a mobile device. But I didn't see anything about archive. I didn't see anything about data protection. I didn't see any of the steps one would normally associate with data lifecycle management: end of life, disposal, marking the data in some way so it can be gotten rid of after a time. I know that sounds like a dirty word, but every hospital I go into is hoping their infrastructure will even stand the test of time required by HIPAA to hold onto the damn data, so they can get rid of it just as soon as it expires. Anyway, there was no mention of that. That's another gap in the story that I'd like to see filled.

I recently wrote about a strategy some of my clients are starting to explore, called archive-in-place. When you're doing this behind-the-server nodal clustering of storage, the way VMware wants to do a virtual SAN and Microsoft is doing with its clustered Storage Spaces, what you're doing is replicating data on multiple nodes, and if one node fails, you simply replace the hardware. That's the story. So you have high-availability infrastructure, but you also have a 300 to 650 percent increase in storage capacity demand that you're going to have to deal with somewhere, because you've got a duplicate set of data behind every node.
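That capacity figure falls straight out of the replica arithmetic. The sketch below uses illustrative replica counts and protection-copy overheads, assumptions chosen only to show how a range like the 300 to 650 percent Jon cites can arise, not vendor-published figures.

```python
# Back-of-envelope: raw capacity demand when every node keeps a full copy.
# Replica counts and overhead fractions are illustrative assumptions only.

def raw_capacity_tb(usable_tb: float, replicas: int, extra_copies: float = 0.0) -> float:
    """Raw storage for usable_tb of data held in full on `replicas` nodes,
    plus a fractional overhead for snapshots or backup copies."""
    return usable_tb * replicas * (1.0 + extra_copies)

usable = 100  # TB of actual data
for replicas, extra in [(3, 0.0), (3, 0.5), (4, 0.625)]:
    raw = raw_capacity_tb(usable, replicas, extra)
    print(f"{replicas} replicas + {extra:.1%} protection copies -> "
          f"{raw:.0f} TB raw ({raw / usable:.0%} of usable)")
```

Three straight replicas alone already put raw demand at 300 percent of the usable data; layering snapshot or backup copies on top is what pushes demand toward the high end of the range.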
How is this going to work? If you're doing LPARs and you're replicating, you're going to do dual redundant z13s. That's what one fellow described on stage, I think it was the gentleman from the Netherlands: his bank requires always-on service, so they replicate everything to a second installation of their z mainframe. That's fine, but does that mean we're replicating all the data? Are we using a shared complex, like a SAN, behind it? These are questions that really need to be worked out, because as you know, there are a lot of ways latency can interfere with the performance numbers they're playing with. You have storage latency. You have that kind of I/O blending, the blender effect everybody talks about in the x86 world; you can have that just as readily if multiple streams are coming out of LPARs at once. The mainframe has historically dealt with that, but this is new territory workload-wise. Maybe the LPARs and the mainframe can be tied together in a coherent way, maybe they could do log structuring with GPFS, I don't know, but how are they going to integrate the data coming in off the distributed infrastructure they're tying to the mainframe? That's where the big stumbling block is.

You remember, a couple of years ago they articulated a strategy called zEnterprise, where they took a blade server and put all the recalcitrant workload, the VMware workload and the Microsoft workload, on blades and integrated that back with a z10 mainframe, okay? The z10 was doing whatever it was doing with its Unix and Linux workload, and it basically had an interconnect over to that blade server, which was managed, terribly, by SNMP instead of REST, and I called them on that. To my understanding, they didn't go very far with it; there weren't a lot of customers who adopted it. Now, what they're saying makes a hell of a lot more sense to me. They've gone to bed with KVM, and you can run KVM workloads, 8,000 of them, inside the basic kit, or you can do the standard LPAR, MVS, and zLinux or z/OS type of virtualization. That brings a lot of the workload that used to be on x86 into your environment.

Now, what are you going to do with the recalcitrant workload, the stuff like Microsoft's, which is notorious for making non-standard resource calls? Usually that's the reason they try to segregate it and put it off somewhere else. There are emulators you can run in an LPAR environment, but I never liked an emulator as much as I liked the real thing, right? So anyway, at the end of the day, what you've got is a story that is visionary but not fully baked, or at least they didn't articulate all the nuances of it. I fully expect those nuances will come out. I do know they got called out a few years ago, when they announced doing virtualization on a mainframe platform. Some of their competitors said, how many workloads can you actually virtualize? And there was some discussion about that. What are the characteristics of those workloads? If they're file servers, who cares? But a more complex program is a little more difficult to implement in a virtual machine in an LPAR. So the question at the end of the day was, how much of this was real and how much of it was bogus? They got pulled into so many debates about that online, in the blogosphere and everywhere else, that I think they just dropped the subject and quit talking about it. They were originally touting, on the z10, that a virtual machine cost $100 to stand up in an LPAR. If you think about that versus $25,000 for a vSphere license, that gets somebody's attention.

What do you think of the new pricing?

You know, I don't track pricing as closely as I probably should. It all sounded very much like, we're going to make some of the pricing work a little better. In any virtual environment you've always had the problem of how to count instances: do you count the number of times it's loaded into memory? Do you count the number of users? How do you actually measure it? They've tried to measure it on a transactional basis, and now they're going to simplify that a little bit. I got the impression they're also going to try to simplify the licensing of some of the accelerators they've got out there as well. I'll wait and see how that comes out in the wash. I'm not as concerned about that; I'm more concerned about whether the architecture itself and the strategy itself will work.

All right, Jon, we've got to leave it there. Thanks very much for coming on.

My pleasure, thanks for inviting me.

All right, a pleasure. Keep right there, everybody. We're back with our next guest right after this.