Well, those lights are really bright. Has anybody else commented on that? My name is Dr. Rumman Chowdhury, and as mentioned, I lead Responsible AI at Accenture. What I'm here to talk about is moral outsourcing and the role of humanity in the age of artificial intelligence.

So these are some stats that we like to spew to clients, but they're all true: 85% of online interactions managed by AI by 2020, an industry worth over $100 billion by 2025. And most importantly, people throw around this idea that AI will usher in the Fourth Industrial Revolution. So we hear all these amazing things about the potential. But then we see headlines that look kind of like this: "AI robots are sexist and racist." And this is actually my personal favorite: "How to avoid racist algorithms." So as a woman, I figure there's some sort of Mace I can spray on algorithms, and they will not be racist. That would be really great, right? I'm joking, but this is a very real phenomenon. We know there are serious problems with bias in data and with the design of our algorithmic systems. And not just in a one-to-one sense: these systems grow, scale, and perpetuate those biases, and they become embedded in labyrinthine systems that are only decipherable to a few.

I'm also currently in pure conspiracy-theory mode, because I'm reading Shoshana Zuboff's The Age of Surveillance Capitalism. If you have not picked it up, please pick it up. Probably not a great thing for somebody already working in ethics and AI to read, but it is an eye-opening book, and I say this as somebody who does this for a living.

But let's think through these terrible things we keep hearing about. We see magazine covers like this, of robots and of people completely out of jobs. And we see a lot of Terminator imagery, to the point that those of us in the ethics of AI space jokingly keep a tally of what I call androgynously hot robots, one of which I have in this deck, along with a Terminator, because it is all so far from what artificial intelligence actually is today. But it's interesting to note that this is the imagery we're being fed, and we wonder what future we're heading into.

So we're building all this wonderful technology, those of us who live in Silicon Valley and actually around the rest of the world. We see it with this sort of techno-optimism, this hope, this idea that we build it and it will be used for good. And yet this is what the media gives back to us: images of mass joblessness, images of people being discriminated against. If you are following ProPublica, they do a lot of really great work, and they have data scientists on staff who look at bias in algorithms. They investigated the COMPAS algorithm, developed by a company called Northpointe, which is used to help determine whether or not prisoners should get parole. Now, because these are opaque models built by a private company, the only reason we even know how these algorithms were in some way developed is that court cases required the company to release documentation about what it had built.

So we are building these things. People are building these things at scale, and they are building them without being totally mindful. So again, as I mentioned, we wonder what future we're heading into and what we can do about it.
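To make the ProPublica point concrete, here is a toy version of the kind of disparity check their data scientists ran on COMPAS: comparing false positive rates, meaning people flagged high-risk who did not go on to reoffend, across two groups. The records below are invented for illustration; this is a minimal sketch, not ProPublica's actual data or analysis.

```python
# Toy data: (group, flagged_high_risk, actually_reoffended).
# These rows are made up for illustration only.
records = [
    ("A", True,  False), ("A", True,  True), ("A", True,  False), ("A", False, False),
    ("B", False, False), ("B", True,  True), ("B", False, True),  ("B", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    group_rows = [r for r in records if r[0] == group]
    print(group, f"false positive rate = {false_positive_rate(group_rows):.2f}")
```

ProPublica's published finding was of exactly this shape: black defendants who did not reoffend were flagged high-risk at roughly twice the rate of white defendants who did not reoffend.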
So I'm going to get a little philosophical in this room. Dirty secret: I'm a social scientist by background, a quantitative social scientist. My PhD is in political science. And I have actually always viewed the field of data science as a quantitative social science. Why? Because we take data and we understand human behavior. The point is not the data; the point is the human behavior. And as we see these systems grow and perpetuate and, in our eyes, commit bad acts at scale, we wonder why and how we got here.

So this image you may recognize, looking at the crowd. Post-World War II, there was a lot of discomfort, to put it one way, among philosophers, among human beings in general, in thinking about how entire nations of individuals became complicit in genocide, literally the genocide of their neighbors. A philosopher, Hannah Arendt, wanted to understand what was in the minds of these people. What she did was go to the Eichmann trial in Jerusalem (you can read her book, Eichmann in Jerusalem), and she coined the term "the banality of evil." Her discovery, in essence: we always want there to be a Hitler. We want there to be a bad person, a bad person who is in charge of all the bad things happening. But bad actions are often simply enabled by bureaucratic indifference, by masses of people who think, "I may contribute to a system, but it is not my fault, because I am not the one who directly did this." So as she's watching Adolf Eichmann, and people like Adolf Eichmann, being questioned by this court, they're genuinely surprised sometimes. And their answer is often: well, I just did my job. I didn't really kill the Jews. I just ordered gas tanks. I just helped build fences. I just patrolled outside concentration camps. But I was not the one who killed the Jews. All this is to say that evil requires systems of indifference. Evil requires people to, quote, just do their jobs, to be another cog in the machine, another step along the way, and that ultimately enables something bad to happen downstream.

So how is this related to what we're talking about? Let's revisit those headlines. It's really interesting to look at how we have linguistically, semantically structured them. We say "AI robots are sexist and racist" and "how to avoid racist algorithms." The one thing you don't have here is a human being being mentioned. We have linguistically erased the human being from responsibility. Somehow we have decided that these algorithms we've built have these properties, and we modify them linguistically as if they are real, as if they're alive. We would never say my car or my laptop is sexist or racist, and yet we ascribe this sort of behavior to artificial intelligence systems. And in doing so, as I mentioned, we write ourselves out of the equation. That's kind of on purpose, because it's beneficial to us as programmers, as developers of artificial intelligence, if we are not mentioned in the sentence and we are not culpable. We build something and then we say, well, the technology did it; I didn't do it. Especially with artificial intelligence, with this notion of learning from your environment and evolving and growing, et cetera, I do think we tend to downplay the designers', the engineers', and the developers' role in creating objective functions for AI. But you've noticed how linguistically we've written ourselves out of the equation.
So why is that beneficial for us? A few years ago (well, even today it's more of a live discussion, but a few years ago) there was a presentation on predictive policing: using algorithms to determine where to deploy police officers. In a nutshell, here's why predictive policing is problematic. Our measurement of crime is not a true measurement of crime. It is a measurement of the crime that has been picked up in a neighborhood, which in the US and in other parts of the world is not randomly distributed. There is a pattern to it, and that pattern is racist. So when we create algorithms to determine where police officers should be deployed, guess what we do? We take that racist and incomplete data and send police officers to places where they will, guess what, do their jobs and arrest more people, thereby increasing community harassment in those places while systematically ignoring places where there may be an equivalent amount of crime but where police just don't bother to go.

All this is to say: at one of the conferences a few years ago, we had some folks on stage presenting a predictive policing algorithm, and somebody asked, well, aren't you worried about how this tool would be used? And the scientist who was presenting said, well, I don't know how it would be used. I am just an engineer.

At Accenture, it's interesting, because we face this problem all the time. As a tech consulting company, we are in the unique position of not just designing the technology but also exercising some control over how it is deployed and used. And I think that's the part of the equation that we as technologists have not quite figured out yet. We can take data and create a model for a particular use, but the problem is that once it is out there in the world, once it is open source, we cannot always control how it is used.

So, moral outsourcing. Again, when we use terms like "racist algorithm," we erase human responsibility in our language, and we purposely anthropomorphize the AI. We give it a face, we give it a body, even though most artificial intelligence doesn't look anything like that. We do that in order to shift the blame for the negative consequences from the humans to the algorithm. And here's the kicker: the problem with moral outsourcing is that it ends up feeding our human paranoia. Walk through the narrative. We have decided artificial intelligence is anthropomorphic, that it takes action on its own, that it has some sort of free will. We have effectively written ourselves out of the language when we refer to these systems, as in "racist and sexist algorithms," and, oh by the way, the only time we refer to them this way is when they do bad things. When AlphaGo defeated Lee Sedol, we always made sure to mention it was DeepMind's AlphaGo, right? So we give the positive credit to the creators linguistically when it does a good thing, when it's a win for humanity. When it is sexist and racist, we all linguistically step away. And paradoxically, that creates our fear. This is what leads to our Terminator imagery. This is what leads to us thinking that a physical robot will actually take our physical jobs and stand in a store and stock shelves or whatever, right? As ridiculous as that may sound.

So what can we do? And here's where I talk a little bit about governance, which I know is an interest of a lot of the community in this room.
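Since the feedback loop is the crux of the predictive policing problem, here is a toy simulation of it. All the numbers and names are invented: two neighborhoods with the same true crime rate, patrols allocated in proportion to historical arrest counts. This is a minimal sketch under those assumptions, not any vendor's actual system.

```python
import random

random.seed(42)

TRUE_CRIME_RATE = 0.3            # identical in both neighborhoods by construction
arrests = {"A": 60, "B": 10}     # historically biased arrest counts
TOTAL_PATROLS = 20

for year in range(5):
    total = sum(arrests.values())
    # The "predictive" model: allocate patrols in proportion to past arrests.
    patrols = {n: round(TOTAL_PATROLS * c / total) for n, c in arrests.items()}
    for n, p in patrols.items():
        # Crime is only recorded where patrols are deployed, so more patrols
        # mean more recorded crime, regardless of the true (equal) rate.
        arrests[n] += sum(random.random() < TRUE_CRIME_RATE for _ in range(p))
    print(f"year {year}: patrols={patrols}, cumulative arrests={arrests}")
```

Run it and the recorded "crime" in neighborhood A keeps pulling ahead of B even though the underlying rate is identical: the model is measuring its own deployment decisions, not crime.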
So you may have heard a few weeks ago about OpenAI creating a language-generation model that they felt was so potentially harmful they did not release it. And I will tell you that in the ethics in AI community, our response (this article was actually about our response) was pretty much: who the hell do you think you are? Because good governance has not come from hiding. Good governance has not come from unilaterally deciding that you are the arbiter of what should and should not be released. And with all respect to OpenAI, I know their intention was good, but the response in the community was: I don't think that was the best thing for you to do. You announce that you've built this thing that is so potentially harmful, but, oh by the way, we're not going to let you know what it is.

This is a struggle the ethics in AI community is largely having, which is why political scientists are beneficial, right? If business leaders and technologists try to create governance in a democracy, it will get fucked up. Because it is often viewed as a top-down methodology, where leaders at the top will create rules and said rules will get enforced. But for people who have studied democratic systems, that is not actually how democratic countries work. We operate by a rule of law, and a rule of law is a social contract. This is kind of the equivalent of going up an escalator: why do people stand on the right and walk up the left? It's just an implicit rule. You will not get fined if you don't follow it; people will give you dirty looks and someone will probably tell you to move over, right? It is socially enforced, not actually regulated. So why do we do it? Because we as a community, as a society, have agreed that this is the best way to make society run more efficiently.

So let's take this notion and put it into our world. How do we create good governance of AI systems? Well, one: telling the world you built a terrible thing and then hiding it is probably not the best way to do it, right? So, there you go. What can we do? One answer is community-led governance, which I appreciate is much, much, much easier said than done. We are actually releasing a governance toolkit in the next few weeks. And again, as I mentioned, I think there's a naive way people look at these things, assuming the powers that be will set a tone from the top and it will filter down, like trickle-down ethics. What we have added in our toolkit is something I call constructive dissent. One of the biggest problems we're seeing in tech companies (and there are many reasons we have employee whistleblowers, walkouts, et cetera, but one of them, I strongly believe, is this) is that we don't have good channels of dissent within organizations. We do not structurally incentivize people to take the right actions and do the right things. We also don't have the right kinds of channels for people to speak up and to feel safe speaking up.

In the media in general, there's an inordinate amount of weight placed on the power of a data scientist. And having been an entry-level data scientist myself: we don't have a lot of power. We have deadlines to meet. We have data sets to clean. Yes, we do make some sorts of decisions, but we are not gods. We are answerable to other people.
So what we actually need to do is create governance from the bottom up, and then cultural norms of dissent. This is a really important thing, and it's why I raised the whole concept of the escalator, of standing on the right and walking up the left. We actually do need norms in our own community.

So what other communities can we look at? Well, we can look at the bioethics community. In the 90s, when all of this talk of genetic testing was going on, there were decisions being made about not doing testing on human embryos, et cetera, which I suppose has since been broken. But it was fascinating to see that agreement reached globally, an agreement on a cultural norm about what we should or should not do. We don't actually have that culture among data scientists. And to be fair, it is quite different in a community with more centralized barriers to entry, like biology or physics, than it is with data science, because what we struggle with here is this desire and this need to democratize this education, this skill, which is very, very valuable, right? We do want to make tools open to us. We do want to make algorithms open source. At the same time, how do we take responsibility for potential misuse? Who is responsible for it? And how do we not then recreate this Hannah Arendt-type world, our fears from World War II, where we have all inadvertently led to a very bad outcome? That's really the concern, and governance is really the best answer for it: again, community-led governance from the bottom up and, most importantly, creating cultural norms of dissent and cultural norms of compliance.

I think it's quite important in our own world, in our own community, to really ask people to do the right thing and to reward doing the right thing. And again, in rank-and-file, private-organization data science, we don't actually have those norms yet. And while we may love this idea of the data scientist as a bit of a Wild West, crazy-person-on-the-fringes kind of job, there is something to be said for standards, something to be said for creating auditability and traceability of algorithmic decision-making (there's a small sketch of what that could look like below), because ultimately somebody has to be held accountable. There are just significant issues with bias, significant issues with the algorithms and artificial intelligence being put out there today.

So my ask of you, the open source community, is: how do we start creating these norms? How do we start cultivating this community? There are plenty of people in this room who have been working on this kind of thing. I personally would really love to engage with you as we try to build out these norms in corporations and at the enterprise level, because it is sorely needed, and not just in technology companies. The companies actually most interested in this are the non-technology companies: retail organizations, the public sector, banks. As they use more and more artificial intelligence, they're actually kind of scared of our world, because we don't have norms and they tend to have them. We don't tend to follow rules. So how do we create this culture? How do we create an ethical culture that can then permeate beyond technologists into other parts of industry, and really achieve the artificial intelligence world that we're all trying to build?

So with that, my time is up. If you have some time, follow me on Twitter, or you can check out my website.
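On that auditability point, here is one minimal sketch, in Python, of what traceability of algorithmic decision-making could look like. Everything in it is hypothetical (the ThresholdModel stand-in, the field names, the audit.log path); the idea is just that every prediction gets an append-only record tied to a model version and a named, accountable human.

```python
import datetime
import hashlib
import json

class ThresholdModel:
    """Hypothetical stand-in scoring model, just to make the sketch runnable."""
    VERSION = "risk-score-0.1"

    def predict(self, features):
        return [1 if sum(features) > 1.0 else 0]

def audited_predict(model, features, owner, log_path="audit.log"):
    """Run a prediction and append a traceable record to an audit log."""
    prediction = model.predict(features)[0]
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model.VERSION,
        "owner": owner,  # a named, accountable human, not "the algorithm"
        "features": features,
        "prediction": prediction,
        # Hash the inputs so a record can later be verified against raw data.
        "input_hash": hashlib.sha256(json.dumps(features).encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction

# Hypothetical usage: the decision and its owner end up in audit.log.
print(audited_predict(ThresholdModel(), [0.7, 0.6], owner="jane.doe@example.com"))
```

The design choice worth noting is the owner field: an audit trail only supports accountability if a person, not "the algorithm," signs each decision.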
Thank you. Thank you.

You know, it's funny: in the open source community, for a long time, one of the community norms that's been important to coders is the idea of complying with licenses. Licenses are a big part of the open source community. There are copyleft licenses, there are permissive licenses. Particularly in Linux, which I was involved in early on, people would kind of flout the license, right? Particularly companies who would use it, maybe through ignorance, sometimes intentionally. And the way we helped create that norm around the license was by explaining the value of complying with it in a business sense: here's the business value. That's how we started changing it, but it's been a huge experiment; we keep working on it. And there actually is an initiative to create a responsible AI license. Interesting. I think it's some folks at IBM and some others; it's sort of in the works. So I think we actually are headed in that direction. If we can help in any way, please let us know. Sure, cool. Thank you. Thank you.