Live from Las Vegas, it's theCUBE. Covering IBM Think 2018, brought to you by IBM.

Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and this is our third day of wall-to-wall coverage of IBM Think. Dinesh Nirmala is here, the vice president of analytics development at IBM. Dinesh, great to see you again. We just saw each other a couple of weeks ago.

I know, in New York.

And, yeah, and of course at Big Data SV.

Right.

The Strata conference, so great to see you again.

Well, thank you.

A little different venue here. It was real intimate in New York City and in San Jose.

I know. Massive.

What are your thoughts on bringing all the clients together like this?

I mean, it's great, because we have combined all the conferences into one, which obviously helps, because the messaging is very clear to our clients on what we are doing end to end. And the feedback has been tremendous. I mean, very positive.

What has the feedback been like in terms of how you guys are making progress in the analytics group? What are they asking you for more of?

Right. So on the analytics side, data is growing by petabytes a day, and the question is, how do our clients create insights into this massive amount of data, whether it sits on premises or in the cloud? So we have been working on building the tools that enable our customers to create insights, whether the data is on a private cloud, public cloud, or hybrid. And that's a very unique value proposition that we bring to our customers: regardless of where your data is, we can help you.

Well, so we're living in this multi-petabyte world now. Like, overnight, it became multi-petabyte. And one of the challenges, of course, people have is not only how do you deal with that volume of data, but how do I act on it and get insights quickly? How do I operationalize it?
So maybe you could talk about some of the challenges of operationalizing data.

Right. So when I look at machine learning, there are three Ds, I always say: the data, the development of the models, and the deployment of the models. When I talk about operationalization, the deployment piece especially is the one that gets most challenging for our enterprise customers. Once you clean the data and you build a model, how do you take that model and bring it into your existing infrastructure? I mean, look at the large enterprises, right? They've been around for decades, so they have third-party software, they have existing infrastructure, they have legacy systems.

Zillions of data marts and data warehouses.

So into all of that, how do you infuse machine learning? That becomes very challenging. I met with the CTO of a major bank a few months ago, and his statement stands out to me. He said, "Dinesh, it only took us three weeks to build a model. It's been 11 months, and we still haven't deployed it." So that's the challenge our customers face, and that's where we bring in the skill set: not just the tools, but the skills to enable them and bring that model into production.

So is that the challenge, the skill sets? Or is it the organizational inertia around, well, I don't have time to do that now because I've got to get this report out? Maybe you could talk about that a little bit.

Right, so that is always there, right? I mean, once a priority is set, obviously the different challenges pull you in different directions. Every organization faces that to a large extent. But from a purely technical perspective, I would say the challenge is two things: getting the right tools and getting the right skills. So with IBM, what we are focusing on is how we bring the right tools regardless of the form factor you have, whether cloud, private cloud, or hybrid cloud, and then how we bring the right skills to it.
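The three-Ds gap Dinesh describes, a model built in three weeks but undeployed for eleven months, is often narrowed by exporting the trained model as a plain artifact that existing infrastructure can load without the training stack. A minimal, hypothetical Python sketch (the JSON artifact format and function names are invented for illustration, not IBM tooling):

```python
import json

def train(points):
    """Fit y = a*x + b by least squares on (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return {"slope": a, "intercept": b}

def export_model(model, path):
    """Serialize the model to a plain JSON artifact any runtime can load."""
    with open(path, "w") as f:
        json.dump(model, f)

def load_and_score(path, x):
    """The 'deployment' side: no training code needed, just the artifact."""
    with open(path) as f:
        m = json.load(f)
    return m["slope"] * x + m["intercept"]

model = train([(1, 2), (2, 4), (3, 6)])
export_model(model, "model.json")
print(load_and_score("model.json", 10))  # → 20.0
```

The point of the split is that the serving side has no dependency on the training environment, which is one way around the legacy-infrastructure friction he mentions.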
So this week we announced the data science elite team, who can come in and help you with building models, looking at the use cases, deciding whether you should be using vanilla machine learning or deep learning, all those things, and with bringing that model into the production environment itself. So I would say tools and skills.

So skills-wise, there are at least two pieces. It's like the multi-tool athlete. You've got the understanding of the tech; you know the tools, and most technology people say, I'll figure that out. But then there are the data and digital skills. It's this double-deep set of skills that is challenging. So you're saying you can help kick-start that. And how does that work? Is that a services engagement?

So once you identify a use case, the data science elite team can come in, because they have some level of vertical knowledge of your industry. They are very well trained data scientists, so they can assess the use case, help you cleanse the data, pick the right algorithms to build the model, and then help you deploy it. I mean, you bring up a very, very good point. Let's just look at the data, right? The personas that are involved in data: there is the data engineer, there's the data scientist, there's the data worker, there's the data steward, there's the CDO. And that's just the data piece, right? There are so many personas that have to come together, and that's why I said the skills are a very critical piece of all of it. But also working together, the collaboration, is important.

All right, tell us more about IBM Cloud Private for Data. We've heard of IBM Cloud Private. Cloud Private for Data is new; what's that all about?

Right, so we announced IBM Cloud Private for Data this week, and let me tell you, Dave, this has been the most significant announcement from an analytics perspective in a while, and we are getting such a positive response. And I'll tell you why.
So when you look at the platform, our customers want three things. One, they want to be able to build on top of the platform. Two, they want it to be open. And three, they want it to be extensible. And we have all three. The platform is built on Kubernetes, so it's completely open, it's scalable, it's elastic; all those features come with it. And then we put the end-to-end pipeline on top, so you can ingest the data, you can cleanse it or transform it, you can build models or do deep analytics on it, you can visualize it. You can do everything on the platform.

I'll take blockchain as an example. If I were to simplify it, you have the ledger, where you are obviously putting your transactions, and then you have a state database where you're putting your latest transactions. The ledger is unstructured. So how do you ingest that ledger as it's getting filled, transform it on the fly, write it into a persistent store, and do analytics on it? Only a platform can deal with that kind of volume of data, and that's what the data platform brings, which is very unique, especially for the modern applications that you want to build.

Yes, because if you don't have the platform, and let's unpack this a little bit, you've got a series of bespoke products, and then you've got a lot of latency in terms of the elapsed time to get to the insights. Along the way, you've got data consistency issues, data quality that maybe is variable, things change.

I mean, think about it, right? If you don't have the platform, then you have siloed products. So all of a sudden you've got to get a product for your governance and integration catalog, you need a product for ingest, a product for persistence, a product for analytics, a product for visualization, and then you add the complexity of the different personas working together across that multitude of products.
You have a mess on your hands at that point. The platform solves that problem because it brings you an integrated, end-to-end solution that you can use to build, for example, blockchain in this case.

Okay, I've asked you this before, but I've got to ask you again and get it on record.

Thank you.

So a lot of people will hear that and say, okay, but it's a bunch of bespoke products that IBM has taken, put a UI layer on top of, and called a platform. So what defines a platform, and how have you not done that but actually created a platform?

So we are taking the functionality of the existing products, and that's what differentiates us, right? I mean, if you look at our governance portfolio, I can sit here and very confidently say no one can match that. So we obviously have that strength.

Right, real depth that you can bring.

So we are bringing the functionality, but what we have done is take the existing products and decompose them into microservices so we can make them cloud native. That is a huge step for us, right? And then once you make that containerized, as microservices, it fits into the open platform that we talked about before, and now you have an end-to-end, well-orchestrated pipeline available in the platform that can scale and be elastic as needed. So it's not that we are bringing the products; we are bringing the functionality.

But I want to keep on this for a second. So the experience for the user is different if you build microservices, as you say, because if you just did what I said and put a UI layer on top, you would be going into these stovepipes, down the stack, and then coming back.

Coming back, exactly.

So the development effort for that must have been fairly massive. I mean, you could have done the UI layer in months.

Right, right, right. But then it is not really a cloud-native way of doing it, right? I mean, if you're just changing the UI, then the experience is completely different.
What we have done is completely re-architect the underlying product suite to match the experience and the underlying platform layer.

So how long did this take? What kind of resources did you have to throw at this from a development standpoint?

So this has been in development for 12 to 18 months, and we put a tremendous amount of resources into making it happen. I mean, fortunately, in our case we have the depth, we have the functionality. So it was about translating that into the cloud-native way of doing app development.

And did you approach this with multiple small teams, or was there a larger team? What was your philosophy?

It was multiple small teams, right? I mean, if you look at our governance portfolio, we had to take our governance catalog and rewrite that code. The same with our master data management portfolio. So it's multiple small teams, each with a very core focus.

I mean, I ask you these questions because I think it adds credibility to the claim that you're making: that you have a platform, not a series of bespoke products.

Right, and we demoed it. Actually, tomorrow at 11 I'm going to deep-dive into the architecture of the whole platform itself, how we built it, what components we used, and I'm going to demo it. So the code is up and running, and we are going to put it out there in 2Q for everybody to go use.

At Mandalay Bay you're demoing that? Where is that demo?

It's in Mandalay Bay. So we have a session at 11:30.

Talk more about machine learning and how you've infused machine learning into the portfolio.

Right, so every part of our product portfolio has machine learning. So I'll take two examples. One is DB2. Today, the DB2 optimizer is a cost-based optimizer. We have taken the optimizer and infused machine learning into it so that, based on the query that's coming in, it predicts the right access path and takes it.
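A learned access-path chooser of the kind Dinesh describes can be sketched as a model that picks an index scan versus a table scan from observed query outcomes rather than from a fixed cost formula. This toy Python sketch is purely illustrative; the selectivity feature and threshold learner are invented and are not the DB2 optimizer's actual design:

```python
def cost_based_choice(selectivity):
    """Classic static rule: index scan only for very selective predicates."""
    return "index_scan" if selectivity < 0.10 else "table_scan"

def learn_threshold(observations):
    """Each observation: (selectivity, path_that_actually_ran_faster).
    Pick the threshold that classifies the most observations correctly."""
    candidates = sorted(s for s, _ in observations)
    best_t, best_hits = 0.10, -1
    for t in candidates:
        hits = sum(
            1 for s, fastest in observations
            if ("index_scan" if s < t else "table_scan") == fastest
        )
        if hits > best_hits:
            best_t, best_hits = t, hits
    return best_t

# Hypothetical feedback from real executions: on this system, index
# scans stay faster up to roughly 30% selectivity.
obs = [(0.05, "index_scan"), (0.15, "index_scan"),
       (0.25, "index_scan"), (0.40, "table_scan"), (0.60, "table_scan")]
t = learn_threshold(obs)

def learned_choice(selectivity):
    return "index_scan" if selectivity < t else "table_scan"

print(cost_based_choice(0.2))  # table_scan (static rule misjudges this system)
print(learned_choice(0.2))     # index_scan (learned from observed runtimes)
```

The design point is the one in the interview: the plan choice adapts to what actually ran faster, instead of trusting a one-size-fits-all cost estimate.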
That has been such a great experience, because we are seeing a 30 to 50 percent performance improvement in most of the queries that we run through the machine learning. So that's one.

The other one is classification. So let's say you have a business term that you want to classify. If you have a zip code, our catalog can say there's an 80 percent chance this particular number is a zip code. And then it can learn over time. If you tell it, no, that's not a zip code, that's a postal code in Canada, then the next time you put that in, it has learned. So we have infused machine learning into every product, and our goal is to become a completely cognitive platform pretty soon. I mean, you know, that has also been a tremendous piece of work that we've been doing.

So what can we expect? I mean, you guys are moving fast. We've seen you go from a division of bespoke products to this platform division, injecting machine learning into the equation, bringing in new technologies like blockchain, which you're able to do because you have a platform. What should we expect in terms of the pace and the types of innovations we can see going forward? What can you share with us without divulging secrets?

So from a product perspective, we want to infuse cognitive machine learning into every aspect of the product. We don't want our customers calling us to tell us there's a problem; we want to be able to tell our customer, a day or two hours ahead, that there is a problem. So that is predictability, right? And not just in the product; even on the services side, we want to infuse machine learning throughout. From a platform perspective, we want to make it completely open and extensible so our partners can come and build on top of it, so every customer can take advantage of the solutions that they build.
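The self-correcting classification Dinesh describes, guessing "zip code" with 80 percent confidence and adjusting when told the value is actually a Canadian postal code, can be sketched as a pattern matcher whose per-label confidence shifts with user feedback. The patterns, labels, and update rule below are illustrative assumptions, not the IBM catalog's logic:

```python
import re

class TermClassifier:
    """Guess a semantic label for a value; adjust confidence from feedback."""

    def __init__(self):
        self.patterns = {
            "us_zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
            "ca_postal_code": re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$"),
        }
        # Prior confidence in each label; user feedback nudges these.
        self.confidence = {"us_zip_code": 0.8, "ca_postal_code": 0.8}

    def classify(self, value):
        matches = [(self.confidence[label], label)
                   for label, pat in self.patterns.items()
                   if pat.match(value)]
        if not matches:
            return None, 0.0
        conf, label = max(matches)
        return label, conf

    def feedback(self, label, correct):
        """User confirms or rejects a guess; confidence learns over time."""
        delta = 0.05 if correct else -0.2
        self.confidence[label] = min(1.0, max(0.0, self.confidence[label] + delta))

clf = TermClassifier()
print(clf.classify("10504"))    # ('us_zip_code', 0.8)
print(clf.classify("K1A 0B1"))  # ('ca_postal_code', 0.8)
clf.feedback("us_zip_code", correct=False)  # "no, that's not a zip code"
```

After the negative feedback, the catalog would rank that label lower the next time an ambiguous value comes in, which is the "it has learned" behavior from the interview.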
So you get a platform, you're getting this flywheel effect: inject machine learning everywhere, open APIs so you can bring in new technologies like blockchain as they evolve. Dinesh, thanks very much for coming on theCUBE. Always great to have you.

All right, keep it right there, buddy, we'll be back with our next guest. This is theCUBE, live from IBM Think 2018. We'll be right back.