From the Salesforce Tower in downtown San Francisco, it's theCUBE, covering Accenture TechVision 2019, brought to you by SiliconANGLE Media.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We are live in downtown San Francisco, at the Salesforce office, in the brand new Accenture Innovation Hub. It's a grand opening. I guess they had a soft opening, but we had the ribbon cutting. We were presenting the Accenture Technology Vision 2019 tonight, and we're excited to have somebody who's not a technologist, but who's very important to technology. She's Dr. Rumman Chowdhury, the Global Lead for Responsible AI at Accenture.

I am. Great to see you. Thank you for having me on your program.

Absolutely. So I did some background research on you, and I love that you introduce a lot of your talks with the fact that you're not a technologist. You come at this from a very, very different point of view.

I do. So I am a social scientist by background. I've been working as a data scientist in artificial intelligence for some years, but I'm not a computer scientist by trade. I come more from a stats background, which gives me a different perspective. So when I think of AI or data science, I literally think of it as information about people, meant to understand trends in human behavior.

So there are so many issues around responsible AI. We could probably talk until all these people go home, but we don't have that much time. The first one is really a lot in the news right now: isn't AI simply a codification of existing biases, unless you take a very proactive stance to make sure you're not just codifying biases into software? What do you see?

Absolutely. So we have to really think about two kinds of bias. There's one that comes from our data, from our models. This can mean incomplete data or poorly trained models. But the second one to think about is that you can have great data and a perfect model, but we come from an imperfect world. We know that the world is not a fair place.
Some people just get a poor lot in life. We don't want to codify that into our systems and processes. So as we think about ethics and AI, it's not just about improving the technology, it's about improving the society behind the technology.

Another big topic I think is really important: if you're doing a project and you want to think through some of the ethical issues (should we be collecting this data? why are we collecting this data? why are we running these algorithms?), you make a decision that it's for a particular purpose and the value outweighs the cost. But I think where the challenge really comes in is with the next people that use that data, or the next use that you didn't necessarily have in mind. And I think we hear that a lot in the complaints about the current state of big tech, where everyone is doing their little piece, but over time those get rolled into bigger pieces that weren't necessarily what they were starting with in the first place.

Absolutely, it's something I call moral outsourcing. Because of what we build, we often feel like a cog in a machine. Sometimes as technologists, people aren't willing to take responsibility for their actions, even though we should be. If we build something that is fundamentally unethical, we need to stop and ask ourselves: just because we can doesn't mean we should. When you think about the implications on society right now, there's often not enough accountability, because everybody feels like they're contributing to this larger machine: who am I to question it, and the system will crush me anyway. So we need to empower people to be able to speak their minds and have an ethical conscience.

So I'm curious about the reception of your message when you're talking to clients, because clearly there's a lot of pressure to innovate fast, right?
Everyone is telling everybody that data's the new oil, and we've got to leverage these micro experiences, et cetera, et cetera. And they don't necessarily take a minute to step back and reflect: is this the right thing? Is this the right way? Are we collecting more data than we really need to achieve the objective? So how receptive are companies to your message? Do they get it? Do they have to get hit upside the head with some problem before they really understand the value?

So I'll give you a phrase that everybody understands, and then they get the point about ethics in AI: brakes help a car go faster. If we have the right kinds of guardrails, warning mechanisms, and systems to tell us if something is going to derail or get out of control, we feel more comfortable taking risks. So think about driving on the freeway. Because you know you can stop your car if the car in front of you stops abruptly, you feel comfortable driving 90 miles an hour. If you could not stop your car, nobody would go faster than 15. So I actually think of ethics in AI, or ethical implementation of technology, as a way of helping companies be more innovative. It sounds contradictory, but it actually works very well. If I know where my safe space is, I'm more capable of making true innovation.

Right. So I want to take on another kind of topic, which is really STEM education versus not-STEM, or ethics. And it's interesting, right? There's a huge push on STEM. It's a very, very important thing that's going on now. But as you look not that far down the road, and this event's all about reinventing the future, as more and more of those engineering functions are taken over by the machines, it seems like where the void is is really in talking about: what are the implications? What are the deeper questions we should be asking? What are the ethics and the moral questions, before just building a better mousetrap? Right.
So you're raising a very hot button issue in the ethics and AI space. Is it simply enough to say all technologists should take an ethics course? I think it is very important to have an interdisciplinary education, but no, I don't think one ethics course taken out of context in college will help you. So I think there are a few things to think about. One is that corporations need to have an ethical culture. It needs to be a good thing to be ethical. That's number one. Number two, we need interdisciplinary teams. Often technologists will say, and rightfully so, how was I supposed to know thing X would happen? It was something very specific to a neighborhood, or a country, or a socioeconomic group. And that's absolutely true. So what you should do is bring in the local community, the ACLU, some sort of regional expert. We do need to move towards creating interdisciplinary teams.

Right. So you brought up another really cool thing in one of your talks: fairness, accountability, transparency, and explainability. Nobody likes black box algorithms. But fairness specifically is such an interesting concept. We all feel very slighted if we perceive things not to be fair. The reality is, life is not fair. A lot of things are not fair. So as people try to incorporate some of these things into the way they do business, how can they do a better job? What are some of the things they should be thinking about so they can be more fair?

Yeah, fairness is a very complicated, complex thing. Whenever someone asks what it means to be fair, I point them towards this really great talk from the FAT* conference called 21 Definitions of Fairness, which lays out all these different ways in which we can quantify and measure the concept of fairness. Well, at Accenture, we took that talk and some other papers and created something called the fairness tool.
So it's a tool to help guide discussion and show solutions around algorithmic bias and fairness. Now, the way we think about it is not as a decision maker but as a decision enabler. How can you, as a data scientist, communicate to a non-technical person, explain the potential flaws and problems, and then take collective action? The algorithm can help you make that decision, but it's not automating the decision for you. So what it does is help smooth the conversation and pinpoint where there might be bias or unfairness in your algorithm.

Right. Well, we don't have time tonight, but at another time we're going to dig deeper into this, and into the biomechanics and bioengineering and a lot of the great topics that you've covered in a number of your talks. I really enjoyed getting to meet you. You do terrific work; really enjoyed it.

Thank you, thank you very much.

All right, thank you. I'm Jeff, you're watching theCUBE. We're at the Accenture Innovation Hub in downtown San Francisco. Thanks for watching, see you next time.
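The "21 definitions of fairness" discussed in the interview are concrete, computable metrics over a model's outputs. As a minimal illustration of the idea, here is a sketch in Python of one of the simplest definitions, demographic parity (the gap in positive-prediction rates between groups). The function name and data are illustrative assumptions, not part of Accenture's actual fairness tool.

```python
# Hypothetical sketch of one fairness definition: demographic parity.
# A large gap in positive-prediction rates across groups is one
# signal (of many possible ones) that an algorithm may be unfair.

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates between any two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels, same length as predictions
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(group_preds) / len(group_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: group A is approved 3 times out of 4, group B only once.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A tool built around metrics like this can flag the disparity for a human team to discuss, which matches the "decision enabler, not decision maker" framing above: the number says where to look, not what to do.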