Live from Las Vegas, it's theCUBE, covering IBM Think 2018, brought to you by IBM.

Hello everyone, welcome to theCUBE here at IBM Think 2018. It's our flagship program, where we extract the signal from the noise, with live enterprise and technology coverage here. Of course, we're going to get all the data as well. Inderpal Bhandari, global chief data officer for IBM, is here in theCUBE, a CUBE alumnus. As the chief of data for the entire company, your job is pretty secure right now. Ginni Rometty is talking about data as the center of the value proposition, blockchain and AI. We, Dave and I, call it the innovation sandwich. You've got job security right now.

I guess you could put it that way.

So obviously, all kidding aside, we've talked before in theCUBE about the importance of data. You know, we're data driven, we're data geeks. This is a wonderful time to be in this world, because the disruptive enabling that's going on with data has really been, I think, underplayed. It's been more of a tech conversation, but it's the business benefits that this enables. I mean, just blockchain alone, what that could do for efficiencies in rewiring the value chains in a decentralized environment. And then what AI promises with the use of data to automate value creation. This is pretty spectacular.

No, I would completely agree with you. I think it's a very exciting time to be in our industry. And John, I think the challenge, though, is what does it mean for the enterprise? If you put yourself in the shoes of our customers, they're trying to understand what this really means for the enterprise. What's an AI enterprise? What are the use cases for blockchain that play in the enterprise? And that's one of the major foci that I have within my organization.
You know, my role within IBM as the global chief data officer is to create an AI enterprise within IBM itself and then use that as a showcase for our customers, so they're able to understand clearly what the use cases are that make a lot of sense. Because frankly, IBM looks a lot like some of our customers. We are a large enterprise, we've been around for a while, and that fits the profile of the large customers that we serve.

Well, IBM's a perfect melting pot, a Petri dish if you will, to look at the future, because you have legacy. There's over a hundred years of being in business, so you've been around, but you're also pushing the latest technologies. How has IBM been using the tech? Can you give an example? Because this is the digital transformation challenge that most existing leaders have. You know, you only need to be five years old to be kind of an old relic compared to what's on the table right now, given the speed of innovation. So there has to be a constant energy around understanding how to create sustainable tech and business models, and have those be regenerative and self-healing. I mean, this is a new normal that is just hitting us. How do you guys do it? Can you give some examples?

Yes, absolutely. So we've taken the view that we want to transform our key processes within the company. And the examples of these processes, they're not unique to us; they're typical of any large enterprise. You know, these could be procurement, supply chain, marketing, research, data. So we've got these end-to-end processes which we are now transforming through the use of AI and blockchain, these kinds of technologies, so that we're able to then use those as showcases. So in terms of examples of how we're making use of these today, I'll give you some that are, you know, at a whole-process level.
For instance, supply chain: trying to understand what the risks to our supply chain are based on emerging weather conditions and emerging political events, trying to unravel all that, and then essentially using an intelligent system to guide us to make the best decisions with regard to the supply chain. That's what I would call a process-level example. I'll give you one example within data which seems, to some extent, quite trivial, but actually there are literally thousands and thousands of such decisions made every day in a large enterprise. So one of the things that we do in my organization is try to understand if a client that we're dealing with is a government-owned entity. And since we operate globally and there are rules that regulate how one deals with government-owned entities, it's very important for us to get it right so that we do business ethically. And you might think, well, that's a simple decision. It's actually quite complicated, and a lot of different parties have a stake in this, you know, the legal department, the sales area. But now, the way the process is transforming, all that input is fed into an intelligent system that has an understanding of what we've done in the past. It can look at the external data, the news feeds that are available about that organization, as well as the different points of view, and then come to an understanding, and then finally be able to explain back to us its rationale as to why it considers something a government-owned entity or not. So every subject matter expert in the company should be able to make use of this technology. That's what an AI enterprise is. And there literally are thousands and thousands of such people within an enterprise.

I mean, you're putting really complex data at their fingertips, almost as easily as putting numbers on a spreadsheet. That's the kind of work that you guys are thinking about.
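The government-owned-entity decision described above can be sketched in a few lines: stakeholder signals (legal, sales, external news feeds) are weighted into a single score, and the system reports a rationale alongside its call. This is purely an illustrative toy under assumed names and weights, not IBM's actual system or Watson's method.

```python
# Hypothetical sketch: combine weighted stakeholder signals into a
# decision plus an explainable rationale. All source names and weights
# below are illustrative assumptions.

def classify_entity(signals, threshold=0.5):
    """signals: dict of source -> (weight, vote), vote 1.0 = looks
    government-owned, 0.0 = does not. Returns (decision, score, rationale)."""
    total_weight = sum(w for w, _ in signals.values())
    score = sum(w * v for w, v in signals.values()) / total_weight
    decision = score >= threshold
    # Keep a human-readable trace of every input, so the system can
    # "explain back its rationale" as described above.
    rationale = [
        f"{source}: vote={vote:.1f}, weight={weight:.2f}"
        for source, (weight, vote) in signals.items()
    ]
    return decision, score, rationale

signals = {
    "legal_review":  (0.5, 1.0),  # legal flags state-ownership records
    "sales_history": (0.2, 0.0),  # past deals treated it as private
    "news_feeds":    (0.3, 1.0),  # external articles mention state control
}
decision, score, rationale = classify_entity(signals)
print(decision, round(score, 2))  # True 0.8
for line in rationale:
    print(line)
```

The point of the sketch is the rationale list: the decision is auditable by the subject matter expert rather than a black-box yes/no.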
The way I would put it to you, it's more in the sense of engaging the subject matter expert in a dialogue. So you've got this intelligent system, Watson, that's working with the subject matter expert, taking them through the whole scenario. They come in with a use case in mind; I used the examples of the government-owned entity, or risk insight for the supply chain. They're coming in with a use case in mind, and the system is guiding them through: here's the internal data that's relevant, here's the external data that's relevant, here's how you can link them, here are the insights you can draw from them. So it's kind of a two-way street, but it just ends up being a much more accurate decision, made much more quickly.

Ginni's talk and the theme here in 2018 is putting smart to work. I'll edit that for you in our conversation: putting smart data to work. Because that's what you're getting at here. How do you make data intelligent? I mean, we can go to the high levels, up in the clouds, look down and say, yeah, that's a great mission, but it's hard as heck.

It's very hard.

So you've got intelligent data. Is it the right data? Is it contextually relevant? Is it in the right place at the right time? Is the application able to ingest and use the data? How reliable is it?

All that stuff comes into play, and that's where, you know, we've thought of IBM as having a very large portfolio of products that spans from data management, data quality, those kinds of things, all the way to AI and Watson and so forth. Think of it more now as bringing together that portfolio into a cohesive data and cognitive framework, a data and cognitive backbone for the enterprise. And that's essentially what we're putting together.

Inderpal, I want to get your thoughts on something. I'm kind of going on a tangent, since it just popped into my head.
I wrote a blog post in 2007, way back, 10 years ago, that said data is the new developer kit. And it's kind of a riff on the idea that data is going to be the software. So we're seeing that now. I interviewed Rob Thomas earlier; we saw him on data containers. He's starting to get to that level with Kubernetes and these cloud technologies. You now have new models emerging around data where people want to act on data. Whether it's a subject matter expert or a developer, they're essentially developer-users. So data's got to be programmable. It's got to be accessible. How do we get to a world where it's being developed on in a seamless way, just like software's developed on? Because most software is open source; if you go to the Linux Foundation, 90% of most software is open source, and only 10% is actually raw intellectual property. So you can almost think of data the same way.

Yes.

So, just using data in a development context, what's your vision on that?

So, you know, we have a blueprint to make an enterprise into an AI enterprise, or a cognitive enterprise, and it has four elements to it. One of the elements is actually data, for precisely the reasons you just enunciated. You know, developers, if they have to go off and search for data and try to find it, it's not a productive use of their time. So to some extent, you have to bring the data ecosystem to them. And that needs to be part of an AI enterprise: that the data is readily available for developers so they're able to harness it. And so now you get into all the hard questions, right? How do you find it? What is the lineage of the data? So you need to have a super catalog, enterprise-wide, that enables all that.

And then we're making up a new category as we speak; it's called DataOps. Data as code. We had DevOps, with infrastructure as code.
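The enterprise-wide "super catalog" mentioned above can be pictured as a small registry: datasets carry tags so developers can find them, and parent links so lineage can be traced back to raw sources. This is a toy illustration with made-up names, not an IBM product API.

```python
# Toy sketch of a data catalog with search and lineage tracing.
# Dataset names and tags below are illustrative assumptions.

class DataCatalog:
    def __init__(self):
        self._entries = {}  # name -> {"tags": set, "parents": list}

    def register(self, name, tags=(), parents=()):
        """Record a dataset, its discovery tags, and its upstream parents."""
        self._entries[name] = {"tags": set(tags), "parents": list(parents)}

    def search(self, tag):
        """Find every dataset carrying a given tag."""
        return [n for n, e in self._entries.items() if tag in e["tags"]]

    def lineage(self, name):
        """Walk parent links back to the raw sources."""
        chain, stack = [], [name]
        while stack:
            current = stack.pop()
            chain.append(current)
            stack.extend(self._entries.get(current, {}).get("parents", []))
        return chain

catalog = DataCatalog()
catalog.register("supplier_master", tags=["supply-chain", "raw"])
catalog.register("weather_feed", tags=["external", "raw"])
catalog.register("supply_risk_scores",
                 tags=["supply-chain", "derived"],
                 parents=["supplier_master", "weather_feed"])

print(catalog.search("supply-chain"))        # ['supplier_master', 'supply_risk_scores']
print(catalog.lineage("supply_risk_scores"))
```

The lineage walk is what answers the "what is the lineage of the data?" question: a derived score traces back to the supplier master and the weather feed it was built from.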
You know, I was talking about this about a year ago. I didn't get any traction with the idea, but what stuck in my head was: if infrastructure as code, which was DevOps, and which is now serverless when you look at cloud computing, is a set of programmable resources, you can almost make the stretch that data as code is a similar nirvana. Okay, it's available. I'm not searching for it, and I don't need to reconstruct it. I don't need to essentially ingest it; yeah, I'm ingesting it as a function, but in a free-flowing world. What's your reaction to that?

Well, that's why setting up this central backbone for data and cognition is extremely important. And I think the right way to think about it is as a continuum. So you've got data, and then you've got essentially APIs on top of the data that may be representing certain functions you're running on the data. You think about that as a continuum, because those functions end up with data as a result, right? So you've got derived data. So what the backbone needs to be able to do is give developers very quick access to all the raw data, the source data, as well as the derived data, in terms they can understand, so it's easy for them to fathom what it is and they're able to make judgments in conjunction with an intelligent system that guides them. That's the key thing.

And that's why Ginni brought up Moore's Law and Metcalfe's Law in her speech, because she's intimating two things: Moore's Law means faster, smaller, cheaper performance improvements, and Metcalfe's Law is the network effect. Okay, so you know where I'm going with this, right? We're now in a network-effect, gamification world. We see blockchain, we see cryptocurrency, we see decentralized application developers coming on board very quickly. So you have a world with token economics becoming front and center.
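The data-and-functions continuum described above can be sketched simply: a function registered against the backbone runs over a source dataset, and its output is stored back under its own name as derived data, consumable like any raw source. Everything here is an illustrative assumption, not a real backbone API.

```python
# Minimal sketch of the data/derived-data continuum: functions on data
# yield data, which rejoins the backbone. Names are illustrative.

class DataBackbone:
    def __init__(self):
        self.datasets = {}

    def put(self, name, rows):
        """Register a raw/source dataset."""
        self.datasets[name] = rows

    def derive(self, new_name, source, fn):
        """Run fn over a source dataset and store the result as derived data,
        so downstream developers can consume it like any other dataset."""
        self.datasets[new_name] = [fn(row) for row in self.datasets[source]]
        return self.datasets[new_name]

backbone = DataBackbone()
backbone.put("shipments_raw", [{"route": "A", "delay_h": 3},
                               {"route": "B", "delay_h": 11}])

# Derived data: flag shipments at risk; the result is itself a dataset.
backbone.derive("shipments_at_risk", "shipments_raw",
                lambda r: {**r, "at_risk": r["delay_h"] > 8})

print(backbone.datasets["shipments_at_risk"])
```

The design point is that `shipments_at_risk` sits in the same namespace as `shipments_raw`: raw and derived data form one continuum, which is what makes "data as code" plausible.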
And where I see innovation, certainly ICOs, initial coin offerings, are full of scams right now, but it is highlighting the innovation and arbitrage of an inefficient capital market. So I just use that as a use case. But blockchain and cryptocurrency are an opportunity to create new business models from the enabling blockchain capability. How do you view that? Because we're still talking about data here. If you're freeing up more people to have more time to actually do their job, they're going to create new things, maybe new business models. Enter, say, token economics combined with blockchain. This is really where we see a lot of great innovation. Your thoughts on this area of token economics?

No, absolutely. So I think there are two ways to think about it. One is in the transaction of business itself. What you're doing is bringing in the stakeholders for a particular business transaction, and you're giving them a distributed way to arrive at a decision, right? As to whether or not to move forward. So, distributed consensus. You're making that very easy and simple for them, so they can rapidly reach a decision and make their move, whether they're going to put in money, take out money, et cetera. That's one aspect of it. And we literally have...

And by the way, consensus is now a new data source. An active, real-time data set.

Yes. Absolutely, it is creating a data set in its own right. So that's one aspect of it, which is in the transaction of business, making it much more efficient, much faster and so forth. But I think it's also instructive to look at blockchain and apply it, in terms of a secondary use, to the process of managing data itself. So to the extent you're able to establish identities, to the extent you're able to establish permissions and roles, it's going to make governance of data much easier, much faster and much more efficient. These are typically very hard problems for enterprises to solve.
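The two ideas above, distributed consensus among stakeholders and a tamper-evident record, can be sketched together: a transaction commits only once enough stakeholders approve, and each committed record is chained to the previous one by its hash. This is a teaching toy under assumed names, not a real blockchain protocol or IBM's blockchain platform.

```python
import hashlib
import json

# Toy sketch: quorum-based approval plus a hash-chained, tamper-evident log.

class Ledger:
    def __init__(self, quorum):
        self.quorum = quorum  # number of stakeholder approvals needed
        self.chain = []       # committed blocks, each linked to the last

    def propose(self, payload, approvals):
        """Commit the payload only if enough stakeholders approve."""
        if len(approvals) < self.quorum:
            return False  # no distributed consensus yet
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        # Including prev_hash in the hashed body is what makes the log
        # tamper-evident: altering any block breaks every later hash.
        body = json.dumps({"payload": payload,
                           "approvals": sorted(approvals),
                           "prev": prev_hash}, sort_keys=True)
        self.chain.append({"body": body,
                           "hash": hashlib.sha256(body.encode()).hexdigest()})
        return True

ledger = Ledger(quorum=2)
print(ledger.propose({"ship": "parts", "qty": 100}, {"buyer"}))            # False
print(ledger.propose({"ship": "parts", "qty": 100}, {"buyer", "seller"}))  # True
print(len(ledger.chain))  # 1
```

Note that the approvals themselves are stored in each block, which matches the observation above that the consensus record becomes a data set in its own right.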
But I would say that as you go forward, maybe this year or next year, you're going to see examples.

And the opportunity, too, is to actually break down some structural barriers with this new technology.

Yes. Absolutely.

It's the bulldozer of innovation. Not easy, but there is a path. You guys have, well, I hope, a hundred customers with blockchain, and there's a data story there: supply chain, blockchain, value chain, chain activities. Interesting. It's going to lead to a lot more efficiency and accuracy as we move forward. That's Inderpal Bhandari, global chief data officer, here on theCUBE, sharing his insights on data. We didn't even get to the good part, around social data and graphs and all that great stuff we love talking about. But more CUBE coverage is going to continue here. Day two coverage of IBM Think, I'm John Furrier. Thanks for watching.