Hello, welcome, and thank you for being here. My name is Adlete Roman and I'm a PhD student at the University of Mannheim. I work on business ethics with a focus on technology, and today I will talk about motivational appeals to overcome biases in technology design. Since there are so few of us, feel free to raise your hand and ask questions if anything gets unclear or too complicated; I hope to get your feedback and comments and to answer your questions as they come up.

Today I will give a brief introduction to types of bias, starting very broadly. I will speak a little about algorithmic fairness, but that is not the main topic today. I will focus on representational harms in artificial intelligence, then introduce my research design, our experimental design, our findings, and the implications we drew from them.

So let's get to it. Computer scientists have identified three main types of bias. The first is technical bias, which we hear about often: the limitations and constraints of the technology itself, including biased data. An example is Amazon's Rekognition system performing worst for darker-skinned females. The second, very important type is pre-existing social bias, which, as its name says, is bias that exists before we deploy our model. This matters because if we don't understand the social context and the societal biases the model may augment or amplify, the whole model will not be successful. An example is a medical algorithm that was meant to identify patients with the most complex healthcare needs. The programmers, with the best of intentions, assumed they could predict complex healthcare needs from how much money a patient spends on healthcare. This causal theory, which grew out of their own lived experience, unfortunately ignored the pre-existing social bias: in the United States there is a racial gap in access to healthcare, a wealth gap, and other structural inequalities in the healthcare system. These factors bias the results and produce the third type, emergent bias, which is new bias created by the model itself.

The thing is that algorithmic fairness was not very popular until recently. Most research in machine learning focuses on optimizing accuracy. And as we know, computer science is a demographically skewed field, and moral judgment is often related to demographic traits, as are our beliefs about fairness and discrimination. Pierson (2018) showed evidence that women felt more strongly than men that gender should be excluded as an input when using it made it less likely that women would be recommended STEM classes, even if excluding it reduced the algorithm's accuracy. So there is a trade-off between accuracy and fairness, and how we prioritize depends on several factors, including personality, as we will see later.

So why is this important for business? Aside from the evidence of the societal harm that automated decision-making can do, businesses want to avoid scandals. We want to keep the trust of our customers, be reliable and accountable to our stakeholders, comply with human rights and non-discrimination laws, and, of course, avoid discrimination lawsuits based on prima facie discrimination due to AI.
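As an illustration of the trade-off Pierson's respondents were weighing, here is a minimal sketch of mine (synthetic data, not from any study) that trains the same classifier with and without the sensitive gender feature and compares accuracy against the rate at which women receive the positive recommendation:

```python
# Minimal sketch of the accuracy/fairness trade-off (synthetic data).
# Including a sensitive feature can raise accuracy while lowering the
# positive-outcome rate for the protected group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)        # 1 = woman (sensitive feature)
skill = rng.normal(size=n)            # legitimate feature
# Labels that are (unfairly) correlated with gender in the training data.
y = (skill + 0.8 * (1 - gender) + rng.normal(scale=0.5, size=n) > 0).astype(int)

for name, X in [("with gender", np.column_stack([skill, gender])),
                ("without gender", skill.reshape(-1, 1))]:
    pred = LogisticRegression().fit(X, y).predict(X)
    print(f"{name}: accuracy={(pred == y).mean():.3f}, "
          f"positive rate for women={pred[gender == 1].mean():.3f}")
```

The gender-aware model scores higher on accuracy precisely because it reproduces the biased labels, which is the trade-off the survey respondents were asked to judge.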
However, I'm going to focus on an under-researched harm: representational harm. I have already given some examples of allocative harms, which lead to economic harm, for example by continuously denying loans to women. Representational harms, in contrast, concern the narratives we reinforce about a group of people. Artificial intelligence is the biggest classification experiment in our history. We are defining what it means to be black or white, what it means to be a woman or a man. I focus here on the lack of representation of different ethnicities in the digital and virtual world, which can lead to societal harms. Experts at the University of Cambridge have problematized this as the whiteness of AI. This refers to the portrayal of artificial intelligence as white in films, in AI stock images, in chatbots and virtual assistants, and they theorize that this could be a Eurocentric portrayal of intelligence, or of the idealized other. We can see this also in female sex robots. So this raises the question: how will race and gender develop in a subsequent, autonomous stage of general AI?

Because the answer to that question doesn't look very optimistic right now, we thought: we need to engage programmers in social justice. We need to find ways to motivate programmers to look at their work and to take agency over their technology design. We also wanted to understand whether the communicator's race and gender had an impact on this strategy. We know from different literatures, especially the literature on confronting discrimination and bias, that message reception depends on three things: the characteristics of the audience, the characteristics of the speaker, and the message framing. So in our experiment we decided to manipulate the gender and race of the speaker as well as the message framing: a negative or problem framing versus a positive or solution framing, which we'll see ahead.

It's also important to know that stereotypes influence message reception, and stereotypes are pervasive. Research shows that both low- and high-prejudice people are aware of stereotypes; the difference is whether we choose to apply them in our judgment or not. Humans use social categories, and that is not going away, especially with artificial intelligence. Feminists tend to categorize by gender, high-prejudice people by race, others by physical attractiveness, and so on. So categorization is only natural, and stereotype activation is all but inevitable when we meet an outgroup member, unless we have a goal to be egalitarian. This egalitarian, or colorblind, goal, we can say, is automatically activated when such a person meets an outgroup member. This stereotype activation is what we measure in our experiment, as I will show you in a bit, and we do it through an implicit measure: we want to measure real behavior and see how it changes across personality types.

After reviewing several literatures, we noticed three gaps: little discussion of programmers' role in overcoming biases, little empirical evidence on interventions for bias reduction in technology design, and little on black women's experiences when confronting discrimination.
Most research evidence compares black people with white people, while intersectional theory holds that black women, facing both gender discrimination and racial discrimination, have a unique experience that has not been validated empirically; that is part of what we try to do here. And last but not least, as we learned from the stereotype confrontation literature, minorities receive a backlash when they talk about discrimination. I will explain why, and we attempted to reduce this backlash with the solution framing. We will see what happens.

What we did is an online experiment with 590 programmers on Prolific: a two-by-two between-subjects design with white, middle-aged programmers from the United States. As I said, we manipulated the race and gender of the speaker as well as the message framing. Both speakers read the same paragraph, which, to summarize, explains bias in AI and how vulnerable populations, such as LGBTQ people, people with disabilities, women, and people of color, tend to be the most affected while not being at the decision table. Both recorded two videos: in one condition the message was "you are part of the problem", in the other "you are part of the solution", along with the appeal to proactively seek remedies to address this issue.

After watching the video, participants were presented with a mock website, and this is where we tried to measure implicit bias and real behavior. The mock website showed a white AI chatbot. Of course, there is nothing inherently wrong with a white AI chatbot, but you would think that after seeing a video explaining bias in AI and diversity, you would at least mention it, right? So after showing this website, we asked "What would you change about this website?" in an open text box, and we coded responses as one for those who mentioned anything about diversity and zero otherwise.

This is our conceptual model; I will try to make it as digestible as possible. Our first hypothesis: we expected the problem framing to be superior to the solution framing when it comes to the detection of potential bias. As I explained, the detection of potential bias, our dependent variable, was whether the programmer mentioned diversity or not. We expected this because the biggest predictors of prosocial behavior are guilt, fear, and shame. This hypothesis was confirmed in our results.
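For illustration, here is a minimal sketch of how such open-text answers could be coded into the binary outcome. The study's actual coding scheme is not described here, so the keyword list below is purely hypothetical:

```python
# Hypothetical keyword-based coder for the 0/1 "mentioned diversity" outcome.
DIVERSITY_TERMS = {"diversity", "diverse", "inclusive", "representation",
                   "ethnicity", "race", "gender"}

def mentions_diversity(answer: str) -> int:
    """Return 1 if the free-text answer touches on diversity, else 0."""
    words = answer.lower().split()
    return int(any(term in words for term in DIVERSITY_TERMS))

print(mentions_diversity("Make the chatbot avatar more diverse"))  # 1
print(mentions_diversity("I would change the font and colors"))    # 0
```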
Our second hypothesis was that the white male speaker using the problem framing would be the most effective at increasing the detection of potential bias, and that the effect would be reversed for the black female speaker. Let me explain slowly why we expected this. When two members of the same race are having a conversation, the message generally carries further than when they are of different races: when race matches, message reception increases. There is also a stereotype violation, because it is not expected of a white man to talk about discrimination; there is a surprise effect, the message seems non-self-interested, and it reduces self-image threat as well as group threat. Meanwhile, when the black speaker uses the problem framing, she activates the "angry black woman" stereotype, and instead of triggering guilt, which we need to predict prosocial behavior, she mostly triggers irritation. The literature also shows that targets of discrimination cannot trigger as much guilt as those who are not targets of discrimination, and we know that guilt is needed here.

Lastly, we expected these effects to depend on the individual's personality type. Here we used the social dominance orientation scale, which is related to belief in a just world and to legitimizing inequality; let's call it a measure of levels of prejudice. We know that levels of prejudice also influence the reception of messages from members of stigmatized groups. We expected individuals with low levels of prejudice to show a completely reversed effect relative to individuals with high levels of prejudice. The reason is that low-prejudice individuals, those low in social dominance orientation, have a chronic goal to be egalitarian, to be colorblind, and experiments have shown that they are so practiced at correcting stereotype activation that they can do it within 500 milliseconds of seeing an environmental cue, for example a black person. This is what our implicit measure captures in the experiment.
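To make the hypothesized moderation concrete before turning to the results, here is a minimal sketch, with made-up column names and synthetic data standing in for the participants, of how a framing-by-speaker-by-SDO interaction could be tested with a logistic regression (this is my illustration, not the study's reported analysis):

```python
# Sketch of a moderation test: logistic regression of the 0/1 bias-detection
# outcome on framing, speaker, and SDO, with all interactions. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 590  # matches the sample size reported in the talk
df = pd.DataFrame({
    "framing": rng.choice(["problem", "solution"], n),
    "speaker": rng.choice(["white_male", "black_female"], n),
    "sdo": rng.uniform(1, 7, n),            # social dominance orientation score
    "detected_bias": rng.integers(0, 2, n), # 1 = mentioned diversity
})

model = smf.logit("detected_bias ~ framing * speaker * sdo", data=df).fit()
print(model.summary())
```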
Most of our hypotheses were confirmed, except for those concerning the black female speaker and the message framing, which I will explain now. As you can see, the effects are reversed between low-prejudice and high-prejudice individuals, but let's look more closely. As expected, the white male speaker using the problem framing was the most effective at increasing the detection of potential bias. However, it did not work for the black female speaker: whether she used the problem framing or the solution framing, she still got a backlash for talking about discrimination. This tells us something about the tone policing of black women: it hardly matters whether she says it in a positive or a negative way. The literature explains this as stereotype confirmation: it is expected of minorities to talk about discrimination, so message scrutiny is reduced, while it is not expected of a white man, so message scrutiny increases and the message has a greater effect. We could not overcome this. Among high-prejudice individuals, however, the black female speaker using the problem framing actually increased bias detection the most, although here too it made no difference whether she used the problem or the solution framing. This counterintuitive result is due to the stereotype-inhibition mechanism we explained, which is why the effect is reversed. We are running replications of this experiment to understand exactly why this happened, but our assumption is that it was driven by anger or irritation: these participants mentioned diversity the most because she triggered more anger and irritation.

So the takeaways are these. We need allies; we cannot leave all the diversity work to people of color. This is also a good argument against the tone policing of black women: it doesn't matter whether they say it in a positive or a negative way, just by talking about discrimination they pay a price. And last but not least, when talking about bias and discrimination, it would be useful to have a diverse pool of speakers in order to reach different personalities and different people in this field. Thank you. Are there any questions?

Question: Thank you for the presentation. Are you aware of any initiative, business-oriented or not, that is employing such techniques to diminish bias in AI?

Answer: Actually, no, I'm not aware of any. There is some guidance on how to talk to an outgroup audience about discrimination, but I have not seen this kind of manipulation of the message itself.

Question: Thanks for the presentation. A small question: when it comes to machine learning datasets that don't have enough representation of all the different racial groups, would you say it's better to have separate datasets for each racial group, or to include more of them in the same dataset while training?

Answer: That's interesting. Do you use any tool to figure out the distribution in your data?

Question: I don't have any data; it's just a theoretical question.

Answer: It depends; I don't focus so much on that area of distributive justice. But the recommended practice is that all groups are represented, that you have an idea of how your data is distributed across races, and that you at least make sure the model will be fairly accurate for all racial groups in your dataset, similar to ordinary machine learning, where you want every class to be adequately represented. There is a whole debate on that topic, if you want to engage with it: there are different definitions of fairness, and no one agrees on what fairness is or should be. So it's very contextual, and it depends.

Question: Thanks.

Answer: Okay then. Thank you so much.
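As a footnote to that last answer, here is a minimal sketch, with made-up group names and random stand-in predictions, of the two checks mentioned: inspecting how the data is distributed across groups, and whether accuracy is roughly comparable for each group:

```python
# Sketch: check group representation in a dataset and per-group accuracy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=1000, p=[0.7, 0.2, 0.1]),
    "label": rng.integers(0, 2, size=1000),
})
df["pred"] = rng.integers(0, 2, size=1000)  # stand-in for model predictions

print(df["group"].value_counts(normalize=True))  # how skewed is the data?
df["correct"] = df["label"] == df["pred"]
print(df.groupby("group")["correct"].mean())     # accuracy per group
```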