From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM.

Okay, we're back with our coverage of IBM Think 2021. This is Dave Vellante, and this is theCUBE. We're now going to dig into AI and explore the issue of trust in machine intelligence. We're very excited to welcome longtime CUBE alum Seth Dobrin, who is the global chief AI officer at IBM. Seth, always good to see you. Thanks so much for making time for us.

Yeah, always good to see you, Dave. Thanks for having me on again.

It's our pleasure. Look, language matters, and IBM has been talking about language, automation and trust. I know you're doing a session on trustworthy AI. Can you talk about trust in the world of machine intelligence and AI? What do we need to know?

Yeah. As you mentioned, language matters and automation matters, and to do either of those well, you need to really trust the AI that's coming out of your models. Trust in AI really means five things; I think of them as five pillars. The AI needs to be transparent. It needs to be fair. It needs to be explainable. It needs to be robust. And it needs to ensure privacy. Without all five of those combined, you don't really have the ability to trust your AI, either as a consumer or as an end user.

So imagine I'm a business user who's supposed to use this cool new AI that's going to help me make a decision. If I don't understand it and I can't figure out how it got to a decision, I'm going to be less likely to consume it and use it in my day-to-day work. Whereas if I can really understand how and why it got to a decision, and know that it's protecting the ultimate end user's data, it's a lot more likely that I'll end up using that AI.

But there is the black box problem with AI. Is that a technical issue? How do you get around that? It seems non-trivial.
Yeah, solving the black box problem of AI, specifically for either complex traditional machine learning models or what we think of as deep learning models, is not a trivial problem. But IBM and others have been tackling this problem of explainable AI for years, and we've come to a point where we really believe we have a good handle on how to explain these black box models: how do you interpret their results and explain them from the endpoint, and understand what went into each decision at each layer of the model, if it's a deep learning model, so you can extract why and how it's making a certain decision.

Three years ago we thought of it as an intractable problem that we'd be able to solve in the future. I think we're at that future today: unless you get into something incredibly complex, we can explain how and why a model got to a decision. And we do this through various sets of tooling, some of it open source. We're so committed to explainability, fairness and trust when it comes to AI that probably half of what we do is out in the open source community, in the form of toolkits like AI Fairness 360.

Right, thank you for that. So let me ask you another sort of probing question here. Is there a risk of putting too much attention on trust? It's early days in the AI journey, and people worry that it's going to stifle projects, maybe slow down innovation, or even be a headwind to AI adoption and scale. What are your thoughts on that?

You know, I think it's a slippery slope to say it's too soon to be concerned about fairness, trust, privacy, bias and explainability, what we think of as trustworthy AI. I think you can do very interesting, very exciting, very innovative, game-changing things in the context of doing what's right.
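To make the explainability idea concrete: one common model-agnostic way to probe a black box is permutation importance, which measures how much predictions degrade when one input is scrambled. This is a minimal sketch with a hypothetical toy model and data, not IBM's actual tooling:

```python
import random

# A stand-in "black box" model: approves an application when income is
# high enough relative to the requested amount. (Hypothetical model.)
def black_box_model(income, amount, zip_digit):
    return 1 if income >= 2 * amount else 0

# Hypothetical scoring data: (income, amount, zip_digit, actual_outcome)
data = [
    (80, 30, 1, 1), (50, 40, 2, 0), (90, 20, 3, 1),
    (30, 25, 4, 0), (70, 30, 5, 1), (40, 35, 6, 0),
]

def accuracy(rows):
    return sum(
        black_box_model(inc, amt, z) == y for inc, amt, z, y in rows
    ) / len(rows)

def permutation_importance(feature_index, trials=200, seed=0):
    """Average accuracy drop when one input column is shuffled:
    a post-hoc signal of how much the model relies on that input."""
    rng = random.Random(seed)
    base = accuracy(data)
    drops = []
    for _ in range(trials):
        col = [row[feature_index] for row in data]
        rng.shuffle(col)
        shuffled = [
            tuple(col[i] if j == feature_index else v
                  for j, v in enumerate(row))
            for i, row in enumerate(data)
        ]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

# Income matters to this model; the zip-code digit does not.
print(permutation_importance(0))  # noticeably above 0
print(permutation_importance(2))  # 0.0 -- the model never uses it
```

A business user can read this as "the model leans on income, not zip code," which is exactly the kind of explanation that builds trust in a decision.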
And it is right, especially when you're building AIs, or anything that actually impacts people's lives, to make sure they're trustworthy: transparent, fair, explainable, robust, and ensuring the privacy of the underlying data in the model. Otherwise you get to a point where you may be able to do cool things, but those cool things get undermined by previous missteps that have given the industry, or the tools, or the technology a bad rap.

A great example is the conversational AIs that were released into the wild on Twitter and Facebook without any thought about how to keep them trustworthy. That went really bad really quickly, and we want to make sure we avoid that. IBM isn't a consumer-facing company; we're kind of the "IBM inside," if you will. We want to make sure that when the world's largest companies are deploying IBM's AI, or using IBM's tools to deploy their own AI, it's done in a way that gives them the ability to keep things from going off the rails quickly. Because we're not talking about a conversational Twitter bot; we're talking about an AI that's potentially going to help make a life-changing decision. Do we give Dave a mortgage? Do we let Dave out of jail? Is he likely to recidivate? Things that are actually life-changing, not just an embarrassment for the company.

It's a great point, and it's important to get trust right up front. You're right, it did turn bad very quickly, and it's not resolved. A lot of the social companies are saying, well, government, you figure it out, we can't. So let's bring it back to the enterprise. That's what I'm interested in: where is IBM's main focus right now, and where do you see it going?
I mean, you mentioned things like recidivism and mortgages. These really are events that you can predict with very high probability. Maybe you don't get it a hundred percent right, but it really is world-changing in many ways. Where's the focus now, and where do you see it headed?

Yeah, the focus is now, has been for a while, and will continue to be on augmenting intelligence. Especially when it comes to life-changing decisions, we don't really want an AI making that decision independent of a human; we want the AI guiding the decisions that humans make, reducing the universe of those decisions down to something that's digestible by a human. And at the same time, we want to use the AI to help eliminate the cognitive biases that may exist within us as humans.

When we think about bias, we have to remember that the math itself is not where the bias comes from. The bias comes from the data, and the bias in the data comes from prior human decisions that were themselves made for biased reasons. So by leveraging AI to remove the bias from the decisions that are surfaced to humans, it helps eliminate some of these things.

For instance, back to the mortgage example: if we look at the impact of redlining, where people in certain zip codes, certain addresses, certain areas didn't get mortgages, that redlining still exists in the data. I don't know of a single mortgage provider today that wants that as part of their day-to-day practice. This helps remove it and surface a decision based on the context of the individual, their ability to repay in the case of a mortgage, as opposed to what they look like or where they live.
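One standard way to detect the kind of data bias described above is the disparate impact ratio, comparing approval rates across groups (the "four-fifths rule" commonly used in fair-lending analysis). This is a minimal sketch with hypothetical decision data, not any specific IBM tool:

```python
# Hypothetical historical mortgage decisions: (zip_group, approved).
# The skew by zip_group mirrors the redlining pattern described above.
decisions = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of approval rates between groups. The common
    'four-fifths rule' flags anything below 0.8 as potential bias."""
    return approval_rate(unprivileged) / approval_rate(privileged)

ratio = disparate_impact("B", "A")
print(round(ratio, 2))  # 0.25 -- well under 0.8, so this data is flagged
```

Surfacing a number like this is the first step before mitigation: it tells you the historical decisions carry the redlining pattern before any model is trained on them.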
So I like the concept of the combinatorial power of machines and humans, but I wonder if you could educate me on something. There seem to be a lot of potential use cases, for many companies and for IBM as well, for inference at the edge. Everybody's talking about the edge; OpenShift obviously is a big play there, and hybrid cloud. So how do you see that kind of real-time inference playing in? Is it that certain data comes back to the model, and that's where the humans come in? How do you see that?

Yeah, I often get asked, what do you see as the future of AI? And my response is: the future of AI is edge. The reason is that if I can solve for an edge use case, I can solve for every use case between the edge and the data center. Some of the challenges you brought up about being on the edge actually help address other problems too, such as: how do I handle data sovereignty regulations when it comes to training models, and even the models themselves?

Think about it: if I have 50 models deployed around the world, there are 50 versions of the same model deployed at different scoring endpoints, different places where I'm inferencing. Without having to bring the model back, or all the data back, how do I keep all those models in sync? How do I make sure, back to the social media example, that one of them doesn't go completely off the rails?

We do this through federated learning, or distributed learning, whatever you want to call it. It's the concept of having models running at discrete edge or distributed locations. Those models, over time, learn from the data they're scoring on. And instead of sending the data back to retrain, or even the model back to retrain, you send back the changes in the parameters that have been updated in that model.
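The parameter-delta exchange just described is essentially federated averaging: each node reports only its parameter changes plus a sample count, and the center combines them, weighted by how much data drove each change. A minimal sketch, with hypothetical parameter values and node counts:

```python
# Minimal federated-averaging sketch: each edge node reports
# (parameter_delta, num_samples) instead of raw data.
def aggregate(global_params, updates):
    """Combine per-node parameter deltas into one update, weighting
    each node by how much data went into its changes."""
    total = sum(n for _, n in updates)
    merged = list(global_params)
    for delta, n in updates:
        w = n / total
        for i, d in enumerate(delta):
            merged[i] += w * d
    return merged

# Hypothetical: a 3-parameter model and three edge nodes.
global_params = [0.5, -0.2, 1.0]
updates = [
    ([0.1, 0.0, -0.2], 100),   # node trained on 100 samples
    ([0.3, -0.1, 0.0], 300),   # this node gets 3x the weight
    ([-0.1, 0.1, 0.2], 100),
]

new_params = aggregate(global_params, updates)
print([round(p, 3) for p in new_params])  # [0.68, -0.24, 1.0]
```

The merged model is then pushed back out to every node, keeping all deployments in sync without any raw data leaving its jurisdiction.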
You can then put all those together, say you have 50 different distributions that you're managing. You pull them all together, and you can even assign weights to the different ones, based on the different data distributions that may exist at one node or another, or based on the amount of data that's gone into changing those weights. You combine these models back into a single model, and then you push it back out to the edge.

So I'm just thinking, most of the work in AI today, correct me if I'm wrong, is in what I'd call modeling versus inferencing. But you're laying out a future where that's going to change. I think of some of the things we're familiar with today, like fraud detection, maybe weather, supply chains, and that's just going to get better and better. But I also think about areas like smart power grids, smart factories, automated retail; these seem like wheelhouses for IBM. So maybe you could share your thoughts on that, and some of the use cases and examples you're seeing, both today and in the future.

Yeah, so you brought up fraud. Fraud is a really good example of an edge use case that might not seem like an edge use case. As you're swiping a credit card, let's just focus on credit card transactions, most of those transactions occur on a mainframe, and they need a response time that's less than a millisecond. So if I'm responsible for making sure my bank doesn't have any credit card fraud, and I have a model that's going to do it, I can't have the mainframe call out to someplace else to score the model and then come back. And this gets back to the power of hybrid cloud.
So if I can deploy that model on my mainframe, where those transactions are happening, I can score every single transaction directly on the mainframe without having to go out, which enables me to keep my SLA, that sub-millisecond response time, while also preventing fraudulent transactions from happening. That's a great example of what falls into my everything-is-edge bucket: you train the model somewhere else, where you don't have the cost of training it on the mainframe, but you score it back there. And we've actually done this with a couple of banks, where we've trained models in the cloud on GPUs and done the inferencing and scoring on the mainframe, exactly for fraud.

Edge: if you can make it there, you can make it anywhere. Seth, we've got to leave it there. Really appreciate your time.

All right, thanks for having me, Dave. I appreciate it, as always.

It's always great to see you. I hope we can see you later this year face to face, or at least in '22. Thank you.

I hope so. Yeah, let's make that happen, Seth. We'll virtually shake on it.

Thanks everybody for watching our ongoing coverage of IBM Think 2021, the virtual edition. This is Dave Vellante for theCUBE. Keep it right there for more great content from the show.