In the same way, we want to be taught by educators who represent us, and we want politicians who represent us. We also want technology to represent us, in every sense of "represent": we want it to look like us, to mirror who we are, and we want it to stand up for us, to share our values and preserve them. But that really hasn't been the case yet, and that's what I'm going to talk to you about. I'm going to talk about the kinds of bias that are leading to technology letting us down, about why it happens, and I'm going to start to talk about what to do about it.

But first, let me take you back to France in 1984. In France in 1984, the government set up a terminology commission. It's a very French thing; they had had a whole bunch of terminology commissions before. The goal of this one was la féminisation des noms de profession, the feminisation of job titles: the invention of names for jobs traditionally done by men that may now, or one day, be done by women, like doctor, professor, researcher, post-woman, and so forth. And they released a report about the words they had invented, the neologisms for these new jobs, like a woman doctor, or a woman mail deliverer, and so forth.

As the report was released, there was a counter-report, and that's also very French. This counter-report was released by the Académie française, which sees itself as in charge of the French language, standing up for it and ensuring that French remains pure. And what they said was: this very impulse of yours is honourable, but it's going to lead to barbarisms and segregation and worse sexism than we had before, because it's unreflective and unmotivated by research.

Well, to any researcher, that's a dream. So I decided to do an experiment to see who was right: was the feminisation of job titles going to lead to more women moving into those jobs, as the terminology commission suggested? I went to the U.S. and to France, and I did an experiment with children of eight or nine years old, the age at which they're starting to think about gender, and I asked them what they would call the person in a picture. This one is a female truck driver. The word is a little unfortunate; luckily, eight- and nine-year-olds don't know this. Those of you who are French speakers know that camionneuse, unfortunately, already has a meaning that we might not want to use. But in this instance, we asked them simply what they would call this person. What they would call this person, a woman researcher, chercheuse. What they would call this person, a doctoress, a woman doctor. And what they would call this person, a female mail deliverer.

And in fact, what I found was that it was the children who scored most highly on a test of stereotype bias, the children who had the narrowest beliefs about who could do what, who used the feminised terms. That seems to make no sense. So I asked them: why? Why would "doctoress" not mean a woman doctor? They explained: a doctoress is kind of like a doctor, but not a real doctor; that's why we call her a doctoress. And that's because language reflects life, and not the other way around. So when you want to change something, you can't simply change the words, and you can't simply change the pictures. You have to change society. And that's what we're going to turn to now.

So I co-founded, with a number of other very smart people, a non-profit called equalai.org.
And I invite you to join us online, and also to speak to Miriam Vogel, our new executive director, who's here in Davos. We realized that very well-intentioned people can do very nasty things, and we started this foundation, this non-profit, with that in mind.

We looked at the stats. We saw that the numbers of women going into computer science are going down, not up. That parents say they'd like their children to be computer scientists so that they can earn more, but don't want their girl children to take computer science classes. We know that in less-resourced schools, computer science isn't even taught. Why is that? What do we do about it? Why does it matter? A parent said to me: I don't understand you. Why would you want my girl to become a computer scientist? That's a sucky profession. It's a bunch of badly dressed, unwashed, greasy-haired men eating Cheetos and drinking Red Bull and staying up all night, alternating between writing code and playing video games. You see the problem here?

So what I suggested was: I don't want to make any girl become a computer scientist. But I want every profession to be available to girls, to people of color, to other underrepresented groups, to those of different abilities. Because if not, we're going to amplify the worst of ourselves in technology. We're going to amplify our ability to kill, our ability to destroy, our ability to hate, and not the best of us. And it takes intentionality, and it takes work, to create technology that amplifies the best of us: human-centered technology.

Now, Joy spoke beautifully about one aspect of that human-centered technology that has not been intentionally created, technology that has relied on what's called a convenience sample. My students define a convenience sample as your two office mates and the two office mates across the hall. And that's not really what you want when you build a piece of technology. The people who built those algorithms grabbed the first data set available, and it was the pale males.

But there are other kinds of bias as well as algorithmic bias. There's also bias in what the workforce looks like, and bias in what the technology looks like. And in all three arenas, we are not represented: not in what we look like, not in what we sound like, not in what our values are and what we want them to be.

And this all happens, for the most part, for a reason that is not intentional, and that is the psychological notion of the stereotype. Now, when we talk about stereotypes, usually what we mean is negative beliefs about a person. But that's not what a stereotype is in technical language. A stereotype is a shortcut: rather than needing to take in the huge stream of information that comes at us every second, we grab one piece of it and extrapolate. We look at an eye, for example, and we say: oh, almond-shaped eye, so, ethnicity: Asian; really good at math, not so good at interpersonal relationships, won't argue in public. Now the first part of that, the Asian, is an extrapolation from one little detail. What it allowed me to do was not to look at the rest of that body, or hear that person talk, or have a conversation with that person, but simply to pattern-match what's in the world with what's in my head. And pattern matching is a lot faster than taking in information. But as you saw, it has dangers. Because when we match a kind of person to a set of traits, like good at math or not willing to argue, sometimes that's fine.
There's a great old movie called Pillow Talk, where Doris Day is talking to Rock Hudson, and he's speaking in a Southern accent. He turns out to be a con man, but she hears that Southern American accent and she says: he's so cute, he's so sweet, so naive, so innocent. And he's not at all. She extrapolated all of that from that one datum. Miley Cyrus thought it was okay to represent herself with slanted eyes, but by doing that she extrapolated to all of those other traits. And that leads people in those negatively stereotyped groups to try to become like the norm, like Joy wearing a white mask. This is an actual product, for sale, to keep your eyelids looking Caucasian, non-Asian. And that's a very sad thing.

It leads to even worse things than that. For example, it turns out that when the hand holding a cell phone is black, people are way, way more likely to see that phone as a gun. And this is, unfortunately, even more true of police officers than it is of everyday people. So you can imagine a young person grabbing a phone and being shot dead. That has, unfortunately, happened way too often, and it continues to happen.

And things like that lead to saying: I'm looking for people to work on my team. I need people who are going to succeed, people who are going to succeed like I succeeded. I'm from a group that fits here. We need people who fit in. Only a few weeks ago, a friend of mine went for an interview in Silicon Valley, and when she didn't get the job, she asked, can you tell me why? And the hiring manager said: you just don't fit. Yeah, you're right. I don't wear the same size. I'm not the same height. My skin happens to be a different color. And you need that. Why do you need that?

Because a bro culture, a culture where everyone looks alike, which is what Silicon Valley looks like now, creates bro products. Our students from Carnegie Mellon come back and tell me, and this has changed over the last couple of years, that their bosses tell them to create for themselves: design technology that you yourself would love. That narrows the field of technology even further. And yet we know that diversity in teams creates innovation. It has been shown, as definitively as we can show anything, that the more perspectives, the more different points of view, the more different kinds of people we have on a team, the more innovative the technology it creates will be.

This is not an example of that. This is an example of creating a technology that fits a stereotype: Alexa is a servant. In the same way that girls, that is, young women, were asked to be telephone operators because they had soft voices and gentle temperaments and could serve the people who used the phone, Alexa does the same thing. And that's why, in the UK, Siri had a male voice: because there, the male valet, or butler, was enough of a stereotype to allow men to serve. But that disappeared in the face of US Alexa, and it's now a female voice. This stereotype gets more and more noxious. Taxi drivers in Germany refuse to have a GPS with a female voice; they refuse to take instructions from her on how to drive.

And an even uglier example, unintentional for sure, comes from a paper on virtual tutors teaching children math. These are four virtual tutors, four representations. In that study (this is not my work), children were allowed to choose whichever one they wanted, and they chose to work first with the one that looked like them: same gender, same ethnicity. But they learned more math from the white male. And that's not surprising.
If you look at the representation in these pictures, these are stereotypes of what a scientist sitting in his armchair believes black men, black women, white women, and white men look like. So we did an experiment. We built two versions of a piece of technology. They looked identical, but one spoke the same dialect as the children we were working with. It took us two years to build a grammar of that dialect. And a dialect is just a language without an army and a navy. Children worked with the technology to do science, and it turned out that the children who worked with the version that sounded like them learned more science. So this has real-world consequences that we have to pay attention to.

So what do we do? We're at an inflection point, and it's both a risk and an opportunity. This is a 14-minute talk and not 14 hours, and I'm happy to talk to anyone who wants to know more. But the inflection point is this: the future of work is not going to be like the past of work. We have to grab hold of that. We have to have inclusion and diversity officers on the team that digitizes the company, that builds the platforms, the performance algorithms, the metrics, and the policies that govern change. It's an opportunity because all of our companies and our universities are in the middle of change, and women, people of color, and people of different abilities need to be at every stage of that change. Not just women engineers, but women product designers, women marketers. Unless we do that, we won't have a diverse group like this.

And I want to say something that doesn't get said often enough. We need to invest in the pipeline. We need to make it okay for girls to be engineers without having to have greasy hair, eat Cheetos, or drink Red Bull. It has to be okay. But the pipeline is leaking just as badly at the top. Senior women, when they get to senior positions, are told that they're difficult, they're hard to manage, they're just not right. And they don't stay in those positions, because they're kicked out. So you can't just hire women; you need to keep them. And to do that, you need what's called the cohort effect: not one woman, but a minimum of three. Not one person of color, but a minimum of three. For anyone who's seen The Intern, not one old man, but a minimum of three. And if you do that, if you have older women and younger women, then you'll have role models for people to look up to, and you'll have cohorts who can talk to one another.

So we've talked about two kinds of bias, and here's the third. One is the representation of us in the workforce. The second is the representation of us in the technology that we use, such as Siri and Alexa. And the third is algorithmic bias. If we pay attention to these, we can have a workforce that looks like the people that we're building for. And all of us win if that happens. Thank you.