From Las Vegas, it's theCUBE. Covering IBM Think 2018, brought to you by IBM.

We're back at Mandalay Bay in Las Vegas. This is IBM Think 2018, day three of theCUBE's wall-to-wall coverage. My name is Dave Vellante, I'm here with Peter Burris. You're watching theCUBE, the leader in live tech coverage. Daniel Hernandez is here. He's the vice president of IBM Analytics. Cube alum, great to see you again, Daniel. Thanks for coming back on.

Happy to be here.

Big tent show, consolidating a bunch of shows. You guys used to have your own analytics show, but now you've got all the clients here. How do you like it? Compare and contrast?

IBM Analytics gave up its own show this year. So having all our clients in one place, I actually like it. We're going to work out some of the kinks a little bit, but I think one show where you can have a conversation around artificial intelligence, data, analytics, and Power Systems is beneficial to all of us, actually.

Well, in many respects, the whole industry is munging together as folks focus more on workloads as opposed to technology or even roles. So having an event like this, where folks can talk about what they're trying to do, the workloads they're trying to create, and the role that analytics, AI, et cetera are going to play in informing those workloads, is not a bad place to get that cross-pollination. What do you think?

Totally. I mean, you talk to a client, they're solving problems. Problems are a combination of stuff that we have to offer in analytics and stuff that our friends in hybrid integration have to offer. For me, logistically, I can say, oh, Mike Gilfix, business process automation, go talk to him, and he's here. That's happened probably at least a dozen times so far, and it's not even been two days.

All right, so I've got to ask you about the tagline, making data ready for AI.

Yeah.

All right, what does that mean?

So we get excited about amazing tech. Artificial intelligence is amazing technology.
I remember Watson on Jeopardy, just being inspired by all the things that I thought it could do to solve problems that matter to me. And if you look over the last many years, virtual assistants, image recognition systems that solve pretty big problems like catching bad guys, are inspirational pieces of work that were inspired a lot by what we did then. And in business, it's triggered a wave of "artificial intelligence can help me solve business-critical issues." And I will tell you that many clients simply aren't ready to get started, and because they're not ready, they're going to fail. So our attitude is: through IBM Analytics, we're going to deliver the critical capabilities you need to be ready for AI. And if you don't have that, 100% of your projects will fail.

But how do you get the business ready to think about data differently? You can do a lot to say the technology you need to do this looks different, but you also need to get the organization to acculturate, to appreciate that their business is going to run differently as a consequence of data and what you do with it. How do you get the business to start making adjustments?

I think you said the magic word: the business. In all the conversations I have with my customers, they can't even tell that I'm from analytics, because I'm asking them about their problems. What are you trying to do? How would you measure success? What are the critical issues you're trying to solve? Are you trying to make money, save money? Those kinds of things. And by focusing on that, we can then advise them on how we can help. And so the data culture you're describing, I think, is an effect. You become data-aware and understand the power of it by doing, and you do by starting with problems, developing successes, and iterating. An approach to solving problems.

So that's kind of a step zero to then getting data ready for AI.
Right. In no conversation that leads to success does it ever start with "we're going to do AI or machine learning" without asking what problem we're going to solve at the same time. It's always the other way around. And when we do that, our technology is then easily explainable. Okay, you want to build a system for better customer interactions in your call center. Well, what does that mean? Well, you need data about how customers have interacted with you and the products they've interacted with. You might want predictions that anticipate their needs before they're telling you. And so we can systematically address those through capabilities we've got.

Dave, if I can amplify one thing: it makes the technology easier when you frame it in terms of success. I think that's really crucially important.

Yeah, I mean, it's super simple. All of us in technology have the habit of going the other way around: my stuff is cool, here's why it's cool, what problems can you solve? Not helpful for most of our clients.

I wonder if you could comment on this, Daniel. I feel like the last 10 years were about cloud, mobile, social, big data. We seem to be entering an era now of sense, speak, act, optimize, see, learn. This sort of pervasive AI, if you will. Is that a reasonable notion, that we're entering that era? And what do you see clients doing to take advantage of it? What's their mindset like when you talk to them?

I think the evidence is there. You just have to look around the show and see what's possible technically. The Watson team has been doing quite a bit of stuff around speech and around image. It's fascinating tech, stuff that feels magical to me, and I know how it works and it still feels kind of fascinating. Now, the question is, how do you apply that to solve problems? I think it's only a matter of time before most companies are implementing artificial intelligence systems in business-critical and core parts of their processes.
And they're going to get there by starting, by doing what they're already doing now with us. That is: what problem am I solving? What data do I need to get that done? How do I control and organize that information so I can exploit it? How can I exploit machine learning, deep learning, and all these other technologies to then solve that problem? How do I measure success? How do I track that? And then systematically running these experiments. I think that crescendos to a critical mass of...

I want to ask you a question. You're a technologist, and you said it's amazing, it's like magic even to you.

Yeah.

Imagine what it's like to non-technologists. But there's a black-box component of AI, and maybe that's okay. I'm just wondering, is that a headwind? Are clients comfortable with that? I mean, if you have to describe how you really know it's a cat... I know a cat when I see it, and the machine can tell me it's a cat, or not a hot dog, a Silicon Valley reference. But to tell me how it actually works, to figure that out, there's a black-box component. Does that scare people, or are they okay with it?

You're probably giving me too much credit, so I really can't explain how this works. But I can tell you how it plays out. Take regulated industries like banks and insurance companies that are building machine learning models throughout their enterprise. They've got to explain to a regulator that they've addressed considerations around discrimination, basically that they're not run by systems that cause them to do things that are effectively against the law. So what are they doing? They're using tools, like ones from IBM, to build these models and to track the process of creating them: what data they used, how the training was done, proof that the inputs and outputs are not discriminatory. And then they go through their own internal general counsel and regulators to get it done.
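One simple, concrete version of the non-discrimination check described here is the "four-fifths rule" comparison of selection rates across groups. The sketch below is an illustration with made-up example data, not the IBM tooling mentioned in the interview:

```python
# Illustrative disparate-impact check (the "four-fifths rule"): compare the
# rate of positive outcomes (e.g. loan approvals) across groups and flag the
# model if the lowest rate falls below 80% of the highest.
# The groups and predictions below are hypothetical example data.

def selection_rate(predictions):
    """Fraction of positive (e.g. 'approve') outcomes in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def disparate_impact(preds_by_group):
    """Return (ratio of lowest to highest group selection rate, per-group rates).
    A common rule of thumb flags ratios below 0.8 for review."""
    rates = {group: selection_rate(p) for group, p in preds_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 3/8 approved
}
ratio, rates = disparate_impact(preds)
print(f"selection rates: {rates}, ratio: {ratio:.2f}")  # ratio 0.60 -> flag for review
```

Checks like this don't explain *how* the model works, which matches the point made next: what gets proven to counsel and regulators is the effect of the model, not its internals.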
So whether you can explain the model in this particular case doesn't matter. What they're trying to prove is that the effect is not violating the law, which the tool sets and the process around those tool sets allow you to get done today.

Well, let me build on that, because one of the ways that it does work is that, as Ginni Rometty said yesterday, there's always going to be a machine-human component to it. The way it typically works is the machine says, I think this is a cat, and the human validates it or not. The machine still really doesn't know if it's a cat. But coming back to this point, one of the key things that we see, and one of the advantages IBM likely has, is that today the folks running operational systems, the core of the business, trust their data sources.

Do they?

They trust their DB2 database, they trust their Oracle database, they trust the data that's in the application.

Do they trust the data that's in their data lake?

Yeah, I'm not saying they do. That's the key question. A really important part of your question is: at what point in time do the hardcore people allow AI to provide a critical input that's going to significantly, or potentially dramatically, change the behavior of the core operational systems? That seems a really crucial point. What kind of feedback do you get from customers as you talk about turning AI from something that has an insight every now and then into something that is effectively essential to the operation of the business?

One of the critical issues in getting machine learning models integrated into business-critical processes and workflows is getting those models running where that work is done. When I was here last time, we were focused on portfolio simplification, on bringing machine learning to where the data was.
We brought machine learning to private cloud, we brought it onto Hadoop, we brought it onto the mainframe. I think it's a critical, necessary ingredient to deliver that outcome: bring the technology to where the data is, because otherwise it just won't work. Why? As soon as you move the data, you've got latency. As soon as you move it, you've got data quality issues to contend with, which will exacerbate whatever mistrust you might have.

And this stuff's not cheap to move, not cheap to ingest.

Yeah. By the way, the machine learning on Z offering that we launched last year in March, April was one of our most successful offerings last year.

So let's talk about some of the offerings. I mean, you're in the business of selling stuff. You talked about machine learning on Z, on whatever platform. Cloud Private, I know you've got perspectives on that. DB2 Event Store, something you're obviously familiar with. SPSS, part of the portfolio. So give us the update on some of these products.

So making data ready for AI requires a design principle around simplicity. In January we launched three core offerings that help clients capture data, organize and control that data, and analyze that data. We delivered a hybrid data management offering, which gives you everything you need to collect data, anchored by DB2. We have the unified governance and integration portfolio, which gives you everything you need to organize and control that data, anchored by our Information Server product set. And we've got our data science and business analytics portfolio, anchored by our Data Science Experience, SPSS, and Cognos Analytics portfolio. So clients that want to mix and match those capabilities, in support of artificial intelligence systems or otherwise, can benefit from that easily.
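The "bring the technology to where the data is" argument can be made concrete with back-of-envelope arithmetic. The dataset size and link speed below are illustrative assumptions, not figures from the interview:

```python
# Rough cost of moving data to a remote model instead of scoring it in place.
# 10 TB and 10 Gbps are assumed, hypothetical numbers for illustration.
dataset_bytes = 10 * 10**12        # a 10 TB dataset
link_bits_per_sec = 10 * 10**9     # a dedicated 10 Gbps link

# Best case: ignores protocol overhead, contention, and retries.
transfer_secs = dataset_bytes * 8 / link_bits_per_sec
transfer_hours = transfer_secs / 3600
print(f"best-case transfer: {transfer_hours:.1f} hours")  # ~2.2 hours
```

Even in this idealized case, every full pass over the data pays hours of latency before the model sees a byte, which is one reason to run the model where the data lives rather than the other way around.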
We just announced here an even more radical step forward in simplification, and we thought that already was simple. So say you want the benefits of the public cloud but can't, or don't want to, move there for whatever reason. And by the way, we think that for workloads you should try to run as much as you can on the public cloud, because of the benefits. But if for whatever reason you can't, we need to deliver those benefits behind the firewall, where those workloads are. Last year the hybrid integration team, led by Denis Kennelly, introduced the IBM Cloud Private offering. It's basically infrastructure-as-a-service and an application platform behind the firewall: run a Kubernetes environment, do new application build-outs, do migrations of existing workloads to it. What we did with IBM Cloud Private for Data is build the data companion for that. IBM Cloud Private was a runaway success for us, and you can imagine the data companion to it being the same, because what application doesn't need data? It's peanut butter and jelly for us.

Last question...

You had another point. Sorry, I want to talk about DB2 and SPSS.

Oh yeah, let's go there, yeah.

DB2 Event Store, I don't know if anyone's mentioned it, has a hundred-times performance improvement on ingest relative to the current state of the art. So you say, why does that matter? If you do analysis, analytics, machine learning, artificial intelligence, you're only as good as whatever data you've captured of whatever your reality is, and current state-of-the-art databases don't allow you to capture everything you would want. DB2 Event Store, with that ingest rate, lets you capture more than you could ever imagine you'd want: 250 billion events per year is basically what it's rated at. So we think that's a massive improvement in database technology. And it happens to be based on open source, so the programming model is something developers find familiar.

SPSS is celebrating its 50th anniversary.
It's the number one digital offering inside of IBM. It had 510,000 users trying it out last year. We just renovated the user experience and made Statistics even simpler. We're doing the same thing on Modeler, and we're bringing SPSS and our Data Science Experience together so that there's one tool chain for data science, end to end, in the private cloud. It's pretty phenomenal.

Okay, great. Appreciate you running down the portfolio for us. Last question, kind of a get-out-your-telescope question.

Yep.

When you talk to clients, when you think about technology from a technologist's perspective, how far can we take machine intelligence? Think 20-plus years out. How far can we take it? How far should we take it? Can they ever really know whether it's a cat?

I don't know the answer to that question, to be honest.

Are people asking that question in the client base, or are they figuring out how to apply it today?

Certainly they're not asking me; it's probably because I'm not the smartest guy in the room.

They're probably asking you. Elon Musk is talking about it, Stephen Hawking was talking about it.

I think it's so hard to anticipate. I think where we are today is magical, and I couldn't have anticipated it seven years ago, to be honest. So I can't imagine.

It's really hard to predict, isn't it?

Yeah, I've been wrong on three-to-four-year horizons. I can't do 20 realistically. So I'm sorry to disappoint you.

No, that's okay, because it leads to my real last question: what kinds of things can machines do that humans can't? And you don't even have to answer this, but I want to put it out there to the audience: think about how they're going to complement each other, how they're going to compete with each other. These are some of the big questions that I think society is asking. And IBM has some answers: we're going to apply it here, here, and here.
You guys are clear about augmented intelligence, not replacement. But these are big questions that I think we want to get out there and have people ponder. I don't know if you have a comment.

I do. I think there are things that are not obvious to human beings: relationships in data expressing some part of your reality that a machine, through machine learning, can see and we can't. Now, what does it mean? Do you take action on it? Is it simply an observation? That's something a human being can do. So I think that combination is something companies can take advantage of today. Those not-obvious relationships inside your data, those not-obvious insights, are what machines can get done now; it's how machine learning is being used today. Is it going to be able to reason about what to do with them? Not yet. So you still need human beings in the middle, especially to deal with consequential decisions. Nonetheless, I think the impact on industry is going to be significant.

Other questions we ask are: are retail stores going to be the exception rather than the norm? Will banks lose control of the payment systems? Will cyber be the future of warfare? Et cetera, et cetera. These are really interesting questions that we're trying to cover on theCUBE. We appreciate you helping us explore them, Daniel. It's always great to see you.

Thank you, Dave. Thank you, Peter.

All right, keep it right there, buddy. We'll be back with our next guest right after this short break.