Hey folks, welcome back to the conference auditorium. Our next speaker is Janie Walter. She's currently an intern with Red Hat and has been working on user interface development. Her talk today is on trans-inclusive development from the ground up. Please welcome our speaker.

Hi, my name's Janie Walter. I use she/her pronouns. I'm going to be talking about trans-inclusive development from the ground up. So first, a little bit about who I am. I'm a junior at Cornell. I study computer science and gender studies. I'm a front-end developer currently working at Red Hat as an intern. And I'm a community organizer. And I'm in need of a new headshot. If you came to my lightning talk, I'm going to be reusing the same jokes, so just laugh anyway.

So, a few things about what this talk is not and what this talk is. This talk is not going to be a primer on trans-inclusive language. I may use some terms you're not strictly familiar with. I promise you it's going to be OK. Google is free. What this talk is going to be is a few suggestions for things you should consider when designing and developing your own applications.

So I'm going to start with a question, which is: who is considered? Who is considered in the applications that we make? The first thing you learn in any design 101 class is that design is all around us. Every door handle, every trash can, every piece of software, and every piece of technology in the broadest sense has been designed, has had hours and hours of labor put into thinking about how people are going to use the things that we use. So the question for this talk is: what happens when a designer doesn't consider you and your needs in designing a product?

This is a video that went viral about two years back of the racist soap dispenser. If you haven't seen this video, I'll recap it really quickly. Basically, it's a soap dispenser that, when a white person puts their hand underneath it, dispenses soap totally fine. When a Black person puts their hand underneath, nothing happens. It just doesn't detect that they're there. I don't actually know what the technology behind this is, or why this glitch happens, but I do know that it shows a failure on multiple levels of design and consideration. The designers and the testers just didn't consider people who have darker skin tones, and as a result, they ended up with a flawed product.

This lack of consideration can have dire consequences in a lot of cases. These are two technologies being used right now in airports across the US. On the right is one of those body scanners, the millimeter wave scanners. If you've been in an airport recently, you've probably seen these: you go in, you put your arms up for a few seconds, it scans you, and then you walk out. Any anomalies get marked in bright yellow on the screen, and then you get patted down. What is actually happening when you step into one of these booths is that the TSA agent on the other side is looking you up and down and deciding whether they think you are a man or a woman. The scanner then scans you, takes that scan of your body, and compares it to one of two models that it has in its memory. This is incredibly problematic for a lot of trans people, because often our bodies do not match those perfect models. Our bodies are simply not considered in the design of the software.
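To make that failure mode concrete, here is a toy sketch of that kind of binary-template matching. This is my own illustration with made-up numbers and region names, not the TSA's actual software; the point is structural. Once there are exactly two reference models and the operator picks one, anything that deviates from the chosen model gets flagged.

```python
# Toy sketch of binary-template matching; made-up numbers and region names,
# not the TSA's actual software. The operator picks one of exactly two
# reference models, and any region that deviates from it gets flagged.
TEMPLATES = {
    "male":   [0.62, 0.55, 0.48, 0.51],   # hypothetical expected reading per body region
    "female": [0.60, 0.58, 0.52, 0.47],
}
REGIONS = ["chest", "waist", "hips", "groin"]
THRESHOLD = 0.05                            # arbitrary "anomaly" cutoff

def flag_anomalies(scan, operator_guess):
    """Flag every region where the scan deviates from the chosen template."""
    template = TEMPLATES[operator_guess]     # only two models exist in memory
    return [region for region, measured, expected in zip(REGIONS, scan, template)
            if abs(measured - expected) > THRESHOLD]

# A body that doesn't track the chosen template gets flagged, regardless of
# whether anything dangerous is actually present.
print(flag_anomalies([0.61, 0.57, 0.51, 0.55], operator_guess="female"))  # -> ['groin']
```

Nothing in this logic asks whether a deviation is dangerous; it only asks whether the body matches the template.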
And this doesn't just go for trans people. Anyone whose body does not exactly match one of these two perfect models is going to be flagged, is going to be considered a threat, is going to be pulled aside, is potentially going to be targeted for harassment, and might miss their flight. All just because they weren't considered in the design of this piece of technology.

On the left is a different piece of technology, which is facial recognition. I think these are not very common yet, but they're probably going to be very common soon. The idea is that instead of showing a boarding pass at your gate, it just scans your face and decides whether the right person has shown up. Facial recognition software is a little bit different from the body scanners, because rather than having two discrete models that it checks against, it uses machine learning: it has a neural network that has been trained on a set of faces. But facial recognition software still fails to account for trans people, because often we aren't present in those sets, or we aren't present in great enough numbers. And so our faces are not considered the same way; our faces aren't recognized correctly, which can lead to some serious issues.

So this brings me to the conclusion that algorithms are never neutral. Any neural network is always going to be biased by the guidance it is given and the set it is trained on. Algorithms will always hold the biases and considerations of their designers. This is why it is integral to recognize those biases and considerations when you're making your software.
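As a rough illustration of how a training set shapes what a model "sees", here is a minimal sketch with entirely made-up data: it just breaks a hypothetical face matcher's accuracy down by group instead of reporting one overall number. A group that is barely present in the training data usually shows up here first, as a quietly lower accuracy that an aggregate metric hides.

```python
# Minimal sketch: measure a hypothetical face matcher's accuracy per group
# instead of one overall number that hides who it fails for. Made-up data.
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Fraction of correct matches, broken down by the group each sample belongs to."""
    correct, total = defaultdict(int), defaultdict(int)
    for predicted, actual, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# Illustrative only: a decent-looking overall accuracy can coexist with a much
# lower rate for a group that was rare (or absent) in the training set.
predicted = ["ana", "bo", "cam", "dev", "???", "fae"]
actual    = ["ana", "bo", "cam", "dev", "eli", "fae"]
group     = ["cis", "cis", "cis", "cis", "trans", "trans"]
print(per_group_accuracy(predicted, actual, group))  # {'cis': 1.0, 'trans': 0.5}
```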
So as engineers, you may be thinking, well, okay, how can we fix these systems? Maybe we need better models, or looser models, or more models in the system itself; or maybe we need more trans people in the data sets that we're training this facial recognition software on. But in real life, that isn't what happens. Instead, the question becomes: how do we make the people fit our system? If we go back to the case of the TSA, what trans people are basically recommended to do is declare our trans status to the TSA officers, basically walk up and say, hello, I am trans and I'm going to go set off your machine. This is not a perfect solution, obviously. When we're traveling, if we've had a name change or a gender marker change, we're advised to carry huge stacks of documentation of our bureaucratic and legal lives, because if those things are questioned, it can be incredibly harmful for us. It can put us in a terrible situation. It can lead to us being harassed. It can lead to us being arrested. So basically what happens is that those on the margins are forced to go to great lengths to render themselves legible to the systems that we make, or risk being labeled as threats.

So the question then becomes: is inclusivity actually enough? Is it enough to reform these broken systems? I think facial recognition is a great example of this. Should we be training facial recognition software on sets that include trans people, that include trans faces? Is the result more inclusive? Maybe. Maybe you'll get a system that's slightly better at recognizing trans faces, at labeling us correctly, at understanding how our faces work. But is the result actually more ethical? To answer this question, I'm just going to show you five headlines from the past month.

"Amazon says the facial recognition tech it sells to cops can now detect fear." "Amazon is the invisible backbone of ICE's immigration crackdown." "Amazon Ring has partnered with over 200 police departments." "Amazon has asked police to advertise Ring." And "Amazon is teaching police how to get Ring footage without a warrant." Trans-inclusive facial recognition software is still facial recognition software. It still carries the same risks of being used for mass surveillance. It still carries the same risks of privacy abuse. It's just that it is now also privacy abuse that recognizes trans people, and I don't know if that's a good thing, because it also might now be able to single out trans people.

So this leads me to the concept of a hierarchy of solutions: when you have these systems doing discrete, biased sorting, there are three possible answers you can go with. There's the machine-legible solution, there's the inclusive solution, and then there's maybe the ethical solution. We can look at Amazon again. I'm sorry, I'm just dumping on Amazon a lot today, but hey, get yourself in order. This is from last year: "Amazon scraps secret AI recruiting tool that showed bias against women." They had a hiring algorithm that ended up basically learning to discriminate against women in tech. Part of the reason was that certain verbs commonly found on male engineers' resumes, such as "executed" or "captured", ended up being weighted more heavily than the language more common on women engineers' resumes.

So there are three possible solutions you can look at for this problem. The first is to train women to put more masculine verbs on their resumes. This is basically not changing the system at all, but changing the people who are applying. I don't think this is a good solution. I just don't. Then maybe there's the inclusive solution, which is: make a better hiring algorithm, make an algorithm that isn't biased against women. And okay, yes, I think that if you can do that, that's a great thing to do, but I also don't know if I, at least, will ever fully trust that you can. I do not know that you can completely remove biases from any algorithm. I think it's a bigger ask than people maybe realize. So maybe then we have to lead ourselves to the ethical conclusion, which is: hey, maybe we just shouldn't have an algorithm deciding who we hire. Maybe we can never put that much faith into the machines and tools that we're making, and maybe we should be looking for better solutions than that.
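To make that verb-weighting failure concrete, here is a toy scorer. This is my own illustration with invented weights, not Amazon's actual model: if the keyword weights are "learned" from a historical pool of mostly male hires, the scorer quietly rewards that pool's vocabulary and penalizes everyone else's, even when the underlying experience is comparable.

```python
# Toy resume scorer, not Amazon's system: keyword weights "learned" from a
# historical pool of mostly male hires end up rewarding that pool's vocabulary.
LEARNED_WEIGHTS = {
    "executed": 1.4,    # verbs over-represented on past (mostly male) hires' resumes
    "captured": 1.3,
    "mentored": 0.1,    # under-represented in the training pool, so barely rewarded
    "women's": -0.8,    # reportedly even the word "women's" was penalized
}

def score_resume(text):
    """Sum the learned weight of every known keyword appearing in the resume."""
    words = set(text.lower().replace(",", " ").split())
    return sum(weight for keyword, weight in LEARNED_WEIGHTS.items() if keyword in words)

# Two comparable candidates described in different vocabularies:
print(score_resume("Executed data migration, captured requirements"))                    # scores high
print(score_resume("Led data migration, mentored interns, women's chess club captain"))  # scores low
```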
So this leads us, again, to the question: maybe the question is not how can we make this ethical, but rather, should we be making this at all? I think this is a tough thing for us as engineers to wrap our minds around, this idea that our tools are limited and that sometimes our tools aren't the best fit for the job. But I think it's one we have to consider going forward if we want to be making systems that are ethical, that are inclusive, that are good for the world.

So, what can you do? These are things you can take out of this room and think about in your daily lives when you get back to your software development or whatever it is you do. First, always consider your blind spots. You should always be questioning the inherent assumptions behind your design. I promise you, there are assumptions about the world and about the people using your software that might not be true. At every level, at every step of your process, you should be asking: is this actually true about the world? Is this actually true about your users?

The next thing is, again, consider your impact. You have to think: what is this software going to be used for? What are the risks behind this software? And is it worth the risk to develop?

The next thing is: hire us. Hire trans people, hire people of color, hire marginalized people. Because if we aren't actually in the room, we can't have a say in what happens. We can't have a say in these systems. We can't spot those glaring blind spots. That's my LinkedIn at the bottom. And more than just hiring us, include us. Make sure that everyone feels safe on your teams. Ensure that everyone's voices are heard and respected, and always pay attention to who is talking and who is not.

These are just some links; these are basically my sources. If you are interested in reading more, I highly suggest these articles. They're very well thought out and well written. But yeah, that's my talk. Thank you so much. I don't have a nice question slide, but I imagine that's where we're going next.

Yeah, so we can start taking questions. As I've been saying at every talk, if you have a question, I'd like you to raise your hand. If possible, try to be near an aisle so that I can get the microphone directly to you. We'd like to pick up all the questions in the recording. Thank you.

So you mentioned that if facial recognition software can recognize trans people, then it can single them out. But if facial recognition software can't recognize trans people, can it still single them out by virtue of the fact that it can't recognize them? I mean, there may be some noise in the system, like someone who had facial reconstruction surgery might also not be recognizable, but that's a very small amount of human input that would be required to still single out trans people.

Yeah, certainly. I think that's still a risk, obviously, which shows that, hey, maybe we shouldn't be making facial recognition software. I think that brings us back to the question of: are these systems inherently flawed? Are these systems worth the risk of making them? Right.

Hi, so in the same vein of whether or not certain technology should ethically be made at all: for a lot of things like facial recognition technology, it's not as if the majority of people really get a say on whether it continues to be developed. And I guess my question is, in the wake of, for example, Amazon maybe just going ahead and making such technology for recognizing trans facial features and whatnot, what do you think is the best approach other than not developing certain technologies?

Well, for one thing, I'm speaking to a room of software engineers, right? We do have the power to actually have a say in what technology gets developed. You can see that right now at Amazon, where there was a huge walkout of Amazon workers protesting the company's connection to ICE. There were the walkouts last year at Google over the company's drone project. We do actually have power here to decide what these companies are making. Even if it sometimes feels like we are powerless, we aren't.
We can still take action, we can still protest, we can still do things to make an impact if we work together.

Thank you. Do you, or do any of these resources, have links to the kinds of concepts that should be kept in mind? So, for example, most software has been written by cis men, and so the default is that names are immutable, right? Yeah, yeah. It affects lots of different communities. So are there other lists of items like that, that don't assume that names are immutable?

Yeah, so the first link on here, "Trans-inclusive Design" by Erin White, is a great article. I found it really, really comprehensive. I had some more of that material and then cut it, because I thought the algorithm stuff was more interesting, but there are lots of things in there. There is another article, "Falsehoods Programmers Believe About Names", which is also great. It's just this list of assumptions you always have to keep in mind: names are not necessarily one word, names change over people's lifetimes, and so on. I think user data is a huge issue for trans people. I know we've had problems at Red Hat with trans people changing the names on their emails and things like that. So always keeping those considerations in mind is of the utmost importance if you're working with user data.
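As a minimal sketch of what taking that advice might look like in code, here is one way to model a user record without baking in those false assumptions. The field names are hypothetical, my own illustration rather than any real product's schema: the name is one free-form, mutable field, renaming is a routine operation, and identity is never derived from the name.

```python
# Minimal sketch of a user record that avoids common false assumptions about
# names: one free-form field (not first/last), mutable, with identity never
# derived from it. Field names are hypothetical, not any real product's schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class User:
    user_id: str                       # stable identifier, never derived from the name
    display_name: str                  # free-form: any length, any script, any number of "words"
    email: str                         # also mutable, for the same reasons
    name_updated_at: Optional[datetime] = None

    def change_name(self, new_name: str) -> None:
        """Names change over people's lifetimes; treat renaming as a routine operation."""
        self.display_name = new_name
        self.name_updated_at = datetime.now(timezone.utc)

user = User(user_id="u-1234", display_name="Janie Walter", email="jwalter@example.com")
user.change_name("J. Walter")          # no approval flow, no old name leaking back into the UI
```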
All the way in the back.

Hi, thank you so much for giving this talk. I was recently reading a book called Algorithms of Oppression by Safiya Umoja Noble, and I just wanted to ask your take on one of her ideas, if that's okay. Okay, yeah, sure. So, briefly, what she mentions in one part of the book is that there's this idea a lot of people buy into, that computers make better decisions than humans do. And she posits that she doesn't think it's a coincidence that, just when marginalized people are finally given the opportunity to participate in limited spheres of decision making, computers are simultaneously celebrated as a more optimal choice for making social decisions. I know it's kind of a pessimistic view, but I'm wondering what your take on that is.

In terms of the idea that algorithms can be unbiased, is that the general idea? Sorry, I think I missed part of that.

Well, I guess her idea is that algorithms kind of limit the decision-making choices that humans get to make.

I think that's very true. Again, it's this idea that algorithms can be unbiased, that algorithms are less biased than people, when it's like, okay, well, who's making the algorithms? I think this is a huge part of what I do in my work in gender studies, which is recognizing systems that are there that we all take for granted. There are rules in place in our society that we do not recognize, and those rules are going to be embedded and entombed within the algorithms that we make. It's the TSA. It's, whoop, all the way back, the idea entombed there that there are only two types of people, men and women, and men always have penises and women don't. This is the only DevConf talk in which you're going to hear the word penises, which maybe makes it the best one. But I would agree with that, in the sense that I don't know if algorithms inherently have a way of somehow escaping that system, of escaping those rules, without the work put in by the designers and the developers. Does that make sense?

Yeah. If there are no more questions, that just about wraps it up. Thank you so much for coming to my talk. I hope you have learned something and have something to think about. Thank you.