from Fisherman's Wharf in San Francisco. It's theCUBE covering the IBM Chief Data Officer Strategy Summit, Spring 2017, brought to you by IBM. Hey, welcome back everybody. Jeff Frick here at theCUBE. It is lunchtime at the IBM CDO Summit. Packed house, you can see them back there getting their nutrition. But we're going to give you some mental nutrition. We're excited to be joined by a repeat performance from Courtney Abercrombie, coming back on with Vijay Shankar. He's the GM of Cognitive IoT and Analytics for IBM, welcome. Thanks for having me. So first off, did you eat before you came on? I want to make sure you don't pass out or anything. Courtney and I both managed to grab a quick bite. Excellent. So let's jump into it. Cognitive, a lot of buzz. IoT, a lot of buzz. How do they fit? Where do they mesh? Why are they so important to one another? Excellent question. IoT has been around for a long time, even though we never called it IoT, right? My favorite example is the smart meters that utility companies use. These things have been here for more than a decade. And if you think about IoT, there are two aspects to it. There is the instrumentation aspect, right? Putting the sensors in and getting the data. And there is the insights aspect, which is making sense of what the sensor is trying to tell us. Combining these two is where the value is for the client, right? Just putting in arbitrary sensors doesn't make much sense. So look at the world around us now. The traditional utility, I will stick with utilities to complete the story, is getting disrupted from both sides. On one hand, you have electric vehicles plugging into the grid to draw power. On the other hand, you have supply coming from solar roofs and so on. Optimizing this is where the cognitive and analytics kick in. So that's the beauty of this world. All these things come together, and that convergence is where the big value is. 
Right, because the third element that you didn't have in your original one was: what's going on, what should we do? And then actually doing something, right? Exactly. You've got to have the action. It pulls it all together. And learning as we go, right? The one thing that is available today with cognitive systems that we did not have in the past is this ability to learn as you go. So you don't need human intervention to keep changing the optimization algorithms. These things can learn by themselves, right? And improve over time, which is huge. So, but do you still need a person to help figure out what you're optimizing for? That's the question: can you have a pure machine-driven algorithm without knowing exactly what you are optimizing for? We are nowhere close to that today, right? General AI, where the system is super smart by itself, is a faraway concept. But there are lots of aspects of specific AI, optimizing a given process, that can still use these unsupervised learning approaches. But it needs boundaries, right? A system can get smart within boundaries. A system cannot just replace human thought, right? It's just augmenting our intelligence. Right. Courtney, you're shaking your head over there. I'm completely in agreement. We are nowhere near. My husband's actually looking forward to the robotic apocalypse, by the way. So, he's the opposite of me. I love people. He's looking forward to that. He's like, the fewer people, the better. Unless I had a Roomba, whatever those little vacuum cleaner things are. Yeah, a Roomba. For him, the fewer people the better. He's a finance guy. He'd rather just sit with the money all day. What does that say about me? Anyway, I digress. But yeah, no, I think we're never going to really get to that point, because we as people always have to be training these systems to think like us. 
So, we're never going to have systems that are just autonomously out there, without having an intervention here and there to learn the next steps. That's just how it works. Well, I always like the autonomous vehicle example, because it's just so clean. If somebody jumps in front of the car, does the car hit the person or run into the ditch? Today, a person can't make that judgment very fast. They're just going to react. Yep. Right. But in computer time, that's like forever. So you can actually make rules, and then people go bananas. Well, what if it's a grandma on one side and kids on the other? Right. Or what if it's a criminal who just robbed a bank? Do you take them out on purpose? So, you know, you get into a lot of interesting parameters that have nothing to do necessarily with the mechanics of making that decision. Yeah, and this changes the fundamentals of computing big time too, right? Because the car cannot wait to ping the cloud to find out, you know, should I brake or should I just run over this person in front of me? It needs to make that determination right away. Very quick. And hopefully the right decision is to just brake. Right. But on the other hand, all the cars that have this algorithm together have collective learning, which needs some kind of cloud computing, right? So this whole idea of edge computing will come and replace a lot of what exists today. You see this disruption even behind the scenes, in how we architect these systems. It's a fascinating time. And then how much of the compute and storage is at the edge? How much of the compute and storage is in the cloud, depending on the decision? Yeah. Like you said, can you do it locally, or do you have to send it upstream, or split it between the two? I mean, if you look at a car of the future, forget the car of the future, a car of the present, like a Tesla, that has more compute power than a small data center. Multiple CPUs, lots of RAM, lots of disk. 
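The edge-versus-cloud split described here can be sketched as a simple routing rule: a decision runs on board when a cloud round trip would blow its deadline, and everything else waits for the cloud, where the collective learning happens. A minimal sketch, with hypothetical latency numbers and function names, not any real automotive stack:

```python
# Hypothetical latency budgets (milliseconds) for illustration only.
EDGE_LATENCY_MS = 10     # on-board inference
CLOUD_LATENCY_MS = 150   # round trip to the cloud

def route_decision(task, deadline_ms):
    """Run a task at the edge if the cloud round trip would miss its deadline."""
    if CLOUD_LATENCY_MS > deadline_ms:
        return ("edge", task)   # e.g. "brake or swerve" must be decided locally
    return ("cloud", task)      # e.g. fleet-wide model updates can wait

print(route_decision("emergency_brake", deadline_ms=50))     # -> ('edge', 'emergency_brake')
print(route_decision("upload_telemetry", deadline_ms=5000))  # -> ('cloud', 'upload_telemetry')
```

The same rule generalizes to the "how much compute and storage at each tier" question: anything whose deadline is tighter than the network can serve has to live at the edge.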
It's a little cloud that runs on wheels. Well, it's a little data center that runs on wheels. Let me ask you a question. We talk about systems that learn, cognitive systems that are constantly learning and that we're training. How do we ensure that Watson, for example, is constantly operating in the interest of the customer and not the interest of IBM? Now, there is a reason I'm asking this question, because at some point in time, I can foresee some other company offering up a similar set of services. I can see those services competing for attention. As we move forward with increasingly complex decisions and with increasingly complex sources of information, what does that say about how these systems are going to interact with each other? I'll explain. He always has the loaded questions today. It's an excellent question. It's something that I worry about all the time as well, and something we worry about with our clients too. There are a couple of approaches to this. To begin with, while we have a big lead in cognitive computing now, there is no hesitation on my part to admit that the ecosystem around us is also developing fast, and there will be hefty competition going forward, which is a good thing. Because if you look at how this world is developing, it is developing as APIs, right? So APIs will compete on their own merits. It's a very pluggable architecture. If my API is not very good, then it will get replaced by somebody else's API, right? So that's one aspect. The second aspect is there is a difference between the provider and the client in terms of who owns the data. We strongly believe at IBM that the client owns the data. So we will not go in and do anything crazy with it, right? We won't even touch it. We will provide a framework and a cartridge that is very industry specific. For example, if Watson has to act as a call center agent for a telco, we will provide a set of instructions that are applicable to telco. 
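The pluggable-architecture point can be illustrated with a tiny provider registry: callers depend only on a shared interface, and an API that scores worse on a validation sample gets swapped for a better one without touching any caller. All names here are hypothetical, not a real Watson API:

```python
from typing import Protocol

class SentimentAPI(Protocol):
    """The shared interface every competing provider must satisfy."""
    def score(self, text: str) -> float: ...

class ProviderA:
    def score(self, text: str) -> float:
        # Naive keyword heuristic standing in for a real service.
        return 1.0 if "good" in text.lower() else 0.0

class ProviderB:
    def score(self, text: str) -> float:
        return 0.5  # always neutral

REGISTRY: dict[str, SentimentAPI] = {"a": ProviderA(), "b": ProviderB()}

def best_provider(sample: str, expected: float) -> str:
    """Pick whichever registered API comes closest on a validation sample."""
    return min(REGISTRY, key=lambda k: abs(REGISTRY[k].score(sample) - expected))

print(best_provider("good service", expected=1.0))  # -> a
```

Because callers only see `SentimentAPI`, replacing a weak provider is a registry change, which is the sense in which "APIs compete on their own merits."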
But all the learning that Watson does is on top of that client's data. We are not going to take it from one telco and put it in another telco. That will stay very local to that telco. And hopefully that is the way the rest of the industry develops too, that they don't take information from one and provide it to another. Even on an anonymized basis, it's a really bad idea to take a client's data and then feed it elsewhere. It has all kinds of ethical and moral consequences, even if it's legal. Absolutely. And we would encourage clients to take a look at some of the others out there and make sure that that's the arrangement they have. Absolutely. What a great job for an analyst firm, right? But I want to build upon this point, because I heard something very interesting in the keynote this morning from the CDO of IBM. He used a term that I've thought about but never heard before: trust as a service. Are you guys familiar with his use of that term? Yep. Okay, what does trust as a service mean? And how does it play out? As a consumer of IBM cognitive services, I have a measurable difference in how I trust IBM's cognitive services versus somebody else's. Some would call that blockchain. In fact, blockchain is often called trust as a service. And blockchain is probably the most physical form of it that we can find at the moment, right? A distributed ledger that is open to everybody, but where no one transaction can be tampered with by somebody else. But if we extend that concept philosophically, it also includes a lot of the concept of identity. Identity, right? I as a user today don't have an easy way to identify myself across systems. If I'm behind the firewall, I have one identity. If I'm outside the firewall, I have another identity. But if you look at the world of tomorrow, where I have to deal with a zillion APIs, this concept of a consistent identity needs to pass through all of them. 
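The tamper-evidence property behind calling blockchain "trust as a service" can be shown with a toy hash-chained ledger: each entry commits to the hash of the previous entry, so rewriting any record invalidates everything after it. This is a single-machine sketch with hypothetical record names, not a distributed ledger:

```python
import hashlib
import json

def _digest(record, prev_hash):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, record):
    """Append a block that commits to the current tip of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev_hash,
                  "hash": _digest(record, prev_hash)})

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev"] != prev_hash or block["hash"] != _digest(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_block(chain, "telco call record 1")
add_block(chain, "telco call record 2")
print(verify(chain))           # True
chain[0]["record"] = "forged"  # tamper with history
print(verify(chain))           # False
```

A real blockchain adds distribution and consensus on top, but the openness-plus-immutability that makes it a trust mechanism is already visible here.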
It's a very complicated and difficult concept to implement. So trust as a service, like blockchain, needs to be an identity service that follows me around, one that is not restricted to an IBM system or an Oracle system or something. But at the end of the day, blockchain is a mechanism. Yes. Trust as a service sounds like... It's transparency. It is. More transparency and more trust. It's a way of doing business. Yes. Sure. So is IBM going to be a leader in defining what that means? Well, look, in all cases, IBM has, we have always striven, what's the right word? Striven, strove, whatever. Strove, thank you. To be a leader in how we approach everything ethically. I mean, this is truly in our blood. We are here for our clients, and we aren't trying to just get them to give us all of their data and then go off and use it anywhere. You have to pay attention sometimes that what you're paying for is exactly what you're getting, because people will try to do those things, and you just need to have a partner that you trust in this. And I mean, I know it's self-serving to say, but we think about data ethics. We think about these things when we talk to our clients, and that's one of the things that we try to bring to the table: that moral, ethical, should you? Just because you can. And we have, just so you know, walked away from deals that were very lucrative before, because we didn't feel it was the right thing to do, and we always will. I mean, I know it's self-serving. You won't know until you deal with us, but pay attention, buyer beware. Just Courtney from IBM, we know what side you're on. Believe me, if I'm associated with it, it's, yeah. But you know, it's a great point, because the other kind of ethical thing that comes up a lot with data is: do you have the ethical conversation before you collect the data about how it's going to be used? But that's just today. 
You don't necessarily know what and how that data might be used tomorrow. And that's when it's really tricky. Future-proofing is a very interesting concept. For example, the vast majority of our analytics conversation today is around structured and unstructured data, those kinds of terms. But where is the vast majority of data sitting today? It is in video and sound files, and nothing is more scary. It is significantly scary because the technology to get insights out of this is still developing. So all these things, like trust and identity and security and so on, and quantum computing for that matter, right? All these things need to think about a future where some arbitrary form of data can come hit you, and all these principles of ethics and legality should still apply. That's a very non-trivial challenge. But I do see that some countries are starting to develop their own protections. The General Data Protection Regulation is going to be a driver of forced ethics. And some countries are not. And some countries are not. I mean, cognitive is just like anything else. When the car was developed, I'm sure people said, hey, everybody's going to go out killing people with their cars now, you know? But it's the same thing. You can use it as a mode of transportation or you can do something evil with it. It really is going to be governed by the societal norms that you live in as to how much you're going to get away with. And transparency is our friend. So the more transparent we can be, with things like blockchain and other enablers that allow you to see what's going on and have multiple copies, the better. All right, well Courtney, Vijay, great topics. And that's why gatherings like this are so important, to be with your peer group, you know? To talk about these much deeper issues that are kind of tangential to the technology but really core to the bigger picture. 
So keep getting out on the fringe to help us figure this stuff out. I appreciate it, thanks for having us. All right, I'm Jeff Frick with Peter Burris. We're at Fisherman's Wharf in San Francisco, the IBM Chief Data Officer Strategy Summit 2017. Thanks for watching.