Oh, boy. Canada. Where does it derive from? Kanata. It means village. As Indigenous people, we are used to our stories getting a little twisted. So listen up as we set the record straight. I'm Kaniehti:io Horn. Please join me as we hear from dozens of Indigenous people. Together we will decolonize our words and our minds on the Telling Our Twisted Histories podcast. You can find episodes on the CBC Listen app or wherever you get your podcasts. From The Conversation, this is Don't Call Me Resilient. I'm Vinita Srivastava. We don't have to accept the technology that we're given. We can reinvent it. We can rethink it. We need to challenge the defaults. It feels like technologies like facial recognition and artificial intelligence are an inevitable part of our lives. We ask Google Nest or Alexa to find and play a song. We use our faces to unlock our phones, and we share news articles on social media. I'll be honest, I feel like this technology has its upsides, like when it can track and predict climate change or identify the rioters who stormed the U.S. Capitol. But there are also a lot of downsides. Once analysts gain access to our private data, they can use that information to influence and alter our behavior and choices. And as with most things, if you're marginalized in some way, the consequences are worse. Experts have been warning about the dangers of data collection for a while now, especially for Black, Indigenous and racialized people. And this year, Amnesty International called for a ban on facial recognition technology, calling it a form of mass surveillance that amplifies racist policing and threatens the right to protest. So what can we do to resist this creeping culture of surveillance? Our guests today are experts in discrimination and technology. 
Yuan Stevens is the policy lead on technology, cybersecurity and democracy at the Ryerson Leadership Lab and a research fellow at the Centre for Media, Technology and Democracy at McGill University's Max Bell School of Public Policy. Her work examines the impacts of technology on vulnerable populations in Canada, the U.S. and Germany. Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, where she leads the Digital Democracies Institute. She's the author of several books, including Discriminating Data, which is out this fall. I've been thinking nonstop about surveillance and facial recognition for the last little while, as you can imagine. I'm not living under a rock. I know that there are significant dangers around personal data collection, and yet I'm one of those complacent people. You know, I've got two kids, I've got a full-time job, I'm really busy. And I actually love social media. I put pics of my kids on there last night. So what are some of the risks of sharing my life online? Yuan, what do you think? So I think there is a lot at stake when it comes to the amount of data we're giving companies, how they can treat us and what they can do with that data once they have it. What I do in my work is look at the development of technology and think about the ways it can be abused. One of the worst possible outcomes is that we end up in a place where companies work with governments not only to access this data, but also to categorize us and control us. One of my own personal interests is how people were treated by the Stasi and by their peers in the German Democratic Republic. And I think about how differently they think about their data than we do in North America, because they have a history behind them of the state snooping into their lives. 
There's an ethnographic study by a researcher named Ulrika Neumdorf, who found that the impacts of this surveillance included serious effects on well-being, mistrust and lasting trauma. If you think about what it feels like for someone to know something about you that you didn't want them to know, that is huge. You know, what I'm hearing you say is that this has implications for our health, our lives, our well-being in society. I sort of understand it on a large scale: okay, it can result in all of these troubling things. But on a personal level, what are the dangers for somebody who says, well, I'm a law-abiding citizen, so what's the problem? So I think one thing we can do is switch it a little and not say, I'm a law-abiding citizen, what's the problem, but ask: what are the conditions under which you're a law-abiding citizen? What's really fascinating now is the example you started with. You took pictures of your kids, you put them online, what's wrong with this? What's interesting now is that publicity and surveillance are so intertwined that it's hard to understand their difference. In other words, when you take that picture and you put it up and you create a public persona and you engage with people, it's not simply you putting it up; by you doing this, what else is happening? I'm just going to use an example that's old, old school, from the '90s, when I was an activist on campus. We knew that there were Canadian agents somewhere in our midst. We just knew that they were collecting files on us. In my head, I imagined those files to be, you know, manila folder files with black-and-white photos. Information just was sort of more localized. And I think now, like, I don't know what it's like to be an activist today. 
And I'm wondering, especially for racialized people, queer people, immigrants, refugees, how they might be extra targeted by this kind of information gathering and surveillance. What's at stake for these communities? Yeah, I think it's a really good question who stands to be the most targeted and harmed by the use of surveillance technology. Whether you're queer or a religious minority or a person of color, if you are protected under discrimination law, what that means is that you deserve treatment that ensures your rights are protected in the same way they would be if you were part of a dominant group. It's absolutely true that certain groups are going to be more targeted than others. If you look at predictive policing technologies, there are certain logics inscribed in the use and design of those technologies that can further perpetuate realities, or sort of statistical findings, that existed before. So for example, if you decide to deploy police to a certain neighborhood because there are more instances of crime there, what you could in fact be doing is finding crime more often there, primarily because you're sending police there more than you would to another neighborhood. You're just looking more, basically. Exactly. That's one of the ways in which people of color and racialized people can be further subjected to surveillance, and even found guilty of crimes more often, because what you have is feedback loops. Feedback loops are a really important concept when you're looking at surveillance studies in the context of these technologies. Every time I hear predictive policing, I think of that dystopian movie Minority Report. So a classic example of this is the Chicago heat list, which is now no longer being used. And there they came up with, allegedly, they said, what we're doing is just coming up with a list of people most likely to be murdered or to murder somebody. 
And then we're going to go visit them and say, look, you better change your ways or else something bad is going to happen. Oh my God, that dystopian movie is actually real. It is real. Absolutely. And the way that they determined the people most likely to be murdered was by going to past arrest history. So if you had a co-arrest with somebody who became a homicide victim, that would be a strong indicator that you would then be involved in a homicide. Now, what's really strange about this is that, first of all, they didn't take time into consideration. So you had people who had co-arrests from being a kid, back when marijuana was illegal, smoking weed together, who now had clean records, being visited by police and told, look, you have to change your ways. And since some of these people had clean records, when the police came and visited, the neighbors were like, this guy's a snitch. The crazy thing as well is that the data that went into these predictive policing models, and the whole setup of the model itself, came from studying mainly African-American neighborhoods on the west side of Chicago. So race and background are already there. Race didn't need to be an overt factor because it was an implicit factor. If you think of how these programs work, they're trained using certain data, and the way they're validated, the way we say, okay, yes, it's made a proper prediction, is by hiding some of that past data and then asking: does this model predict the past correctly? So these don't actually predict the future. They're tested on their ability to predict the past. So if the past is racist, these programs will only be considered correct, only validated as accurate, if they make racist predictions. So you're caught in a system in which learning means repeating the past, which means you lose the future. The reason why we don't want these automated systems is that all they do is automate past mistakes. 
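The feedback loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical simulation, not any real deployed system: the neighborhood names, patrol counts and offence rate are invented for illustration, and both neighborhoods are given the identical underlying rate of offences by construction.

```python
# A toy model of the predictive-policing feedback loop.
# Assumption (made up for illustration): both neighborhoods have
# the SAME true rate of offences per patrol encounter.
TRUE_RATE = 0.05

patrols = {"A": 10, "B": 10}        # start with equal policing
recorded = {"A": 0.0, "B": 0.0}     # offences recorded so far

for year in range(20):
    for hood in patrols:
        # Police can only record crime where they are actually looking,
        # so recorded crime scales with patrol presence, not with reality.
        recorded[hood] += patrols[hood] * TRUE_RATE
    # "Predictive" step: send extra patrols wherever the data says crime
    # is highest -- which is wherever we already looked the hardest.
    busier = max(recorded, key=recorded.get)
    patrols[busier] += 5

print(patrols)   # patrols pile up in one neighborhood
print(recorded)  # ...which now also "has" far more recorded crime
```

After twenty iterations, one neighborhood ends up with eleven times the patrols and more than five times the recorded crime of the other, even though nothing about the underlying behavior of the two neighborhoods differs: the model's "accuracy" is just its success at reproducing where police looked in the past.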
So some artists did this great mockup of a machine learning program to find the white-collar criminal. And it pointed to, like, the fancy side of New York, blah, blah, blah. So I mean, I think the question is, how are we understanding exactly what Yuan was talking about, which is that the communities that are most policed are the ones we have the most data about. And so if the police really want to say, look, we want to be effective and we want to use our resources, then go for this untouched swath of people in suburban homes doing all sorts of stuff who are never pulled over or looked into. Not that I'm advocating that. I want to talk a little bit about Clearview, because some of this became known when the story of Clearview broke in the mainstream media: that all of our data is scraped and then put into this database that is now being used for facial recognition, and that this database is being sold to police or to law enforcement or to companies. Can you explain a little bit about that case and why it's so important in Canada, the Clearview case? Yeah, absolutely. What happened was this company, a startup that's still getting funding, tried to provide, and is trying to provide, its services to the general public and to the police and to governments and all kinds of entities. Clearview is a facial recognition technology company, but it's also a data scraping company. What it does is scrape data from all kinds of sources, social media, websites in general, collect it, and use deep learning and machine learning technologies to analyze whose face is whose and categorize them. Then what it does is sell the service of matching faces. Why this matters is that not only is the company essentially selling face-matching capabilities, but it has scraped significant amounts of data contrary to laws that would otherwise prevent the scraping of that data. Now, data scraping in itself is not to be seen as criminal. 
I think it can be used for legitimate reasons, for example by academic researchers, but none of this was done with our consent. We had no notice. We had no knowledge of this. You mean, like, Canadians? When you say we, we're talking about residents of Canada. Yeah, I think when it comes to both racism and surveillance, we do have Canadian exceptionalism, and Clearview AI and its use by the RCMP are another example showing that surveillance of us in Canada absolutely exists and is occurring. The reason why it matters, too, is because what happened was the RCMP was using Clearview AI's services and conducted hundreds of searches, though it only admitted to some of those to the Office of the Privacy Commissioner. And it's always about the child predators. You know, it always starts with that. And that's something that Bruce Schneier has referred to as the Four Horsemen of the Infocalypse, which is this idea that there is certain aberrant behavior that you want to address. And then you say, you know, I'm going to use this technology only in those situations, which could be true. All of us can get behind the idea that children should be protected, and of course I believe that too. But then what you see happening is surveillance creep, and the ability to use that same technology in other situations. And that's actually, in a way, what's happening potentially, or what could happen, with Apple scanning our images before they are stored in our iCloud, again for child sexual abuse materials. People who are concerned about how technology can be used and abused are always thinking in a sort of Minority Report sense. And perhaps for good reason: we're trying to see what is the absolute worst that could happen with this. 
And it's because we're trying to protect all people. You know that in order to protect all people, you can't allow certain people to be treated a certain way, no matter how much trust there is in the institution. Okay. Are you basically saying my photos in my phone are also something to be worried about? Absolutely. It just gets worse and worse. We have to talk for a minute now, or more than a minute, about facial recognition. I know that you both look at this in your work. Can we talk a little bit about what the technology is and also how it's being used right now? Wendy? Sure. So facial recognition technology is a form of pattern recognition, done mainly through machine learning programs, and these don't focus on features that make sense to us. It's not like a computer saying, oh, I remember these people's eyes, I'm going to match this eye to that eye; it works through various algorithms. Basically, the system sees one face and tries to match it to another face. That's the basic technology. It's very problematic. It doesn't really work well. It's also very bad because the early programs were trained on publicly available faces. So you're thinking Hollywood. Now think of what a hotbed of diversity Hollywood is. Other sources were, like, undergraduates, who will do anything for, as far as I can tell, $5 or some school credit, right? So the libraries were mainly white. And so these technologies work very well with light-skinned faces and really poorly with dark-skinned faces. It's getting better, but that's not the point. The point isn't that we need to be perfect for all skin tones. The reason why this matters so much is: think of how self-driving cars operate, right? If they can't recognize dark-skinned people as people, then there's a clear danger involved. 
But also, because it's not refined on dark-skinned faces, and this is something that people at Georgetown have been working on a lot, it will misrecognize dark-skinned faces as criminal, because it doesn't have that distinction. So there was this famous example from the ACLU, where they looked at the U.S. Congress and asked, you know, who amongst these are criminals? And it was disproportionately people of color who were marked as quote-unquote criminals. So basically, these technologies are built on historical information, which includes historical discrimination, historical racism. And so this idea that science is neutral and technology is neutral is completely wrong. The discrimination is built into the technology. Yeah, to that point, work by Kate Robertson and Cynthia Khoo at Citizen Lab has shown that we absolutely do have a bias to believe that mathematical processes are neutral. And so we'll trust technology and we'll want to sort of listen to it, so to speak, when it has a certain output. It's because we think: this is statistics, this is math, I don't understand how it works, it must be fine. And that's really problematic when you consider the fact that not only police but judges could also rely on what are essentially recommendation systems. It's probably okay for us to be recommended TV shows on Netflix. But when recommendations are being made regarding the most fundamental of our rights, that is a totally different story. I mean, so should we just completely stop using this technology? I absolutely think there should be certain no-go zones when it comes to the collection, and particularly the processing, of our data for certain outcomes. So for example, in the General Data Protection Regulation, which is one of the most advanced and progressive data protection regulations, in Europe, what is not allowed is the processing of information for automated decision-making for the purposes of profiling. 
On its face, what that suggests to me is that you shouldn't be allowed, for example, to collect information about faces in a public setting, except perhaps in very specific circumstances. The presumption should be that you don't collect faces and biometric information in that setting and use it to render someone potentially criminal. And biometric information is also a really sensitive data type that I think absolutely deserves special protection. Right now in Canada, there are no special protections given to that kind of data. What we have is a kind of free-for-all where all data is the same, but in fact different kinds of data have different levels of sensitivity, and there should be enforceable regulation in Canada, I think, spelling out the kinds of data that should not be treated in certain ways. Right now, that doesn't exist. And Wendy, what were you going to say? I saw you were like... I completely agree with everything that Yuan has said. I want to just talk about the predictive part of this, because what I would argue is that the problem is using these programs for prediction. The famous example is Amazon's hiring algorithm, which was trained on all of the hires it made. And what ended up happening is, if you had women anywhere on your CV, you lost a point. How is that even possible? The technology actually docks you a point for being a woman. Yeah. Because they went by who they hired and who they didn't hire, and they didn't hire women, the model learned that being a woman is bad: you're not going to be a good employee. And so they ditched the program. But rather than ditching it, what if we said: thank you so much for meticulously documenting your discrimination? What if we used these programs not for prediction, but as evidence of historical trends? The example I always give is global climate change models. Global climate change models give you the most probable future given past and present behavior. 
But we don't say, oh, this is great, it's going to go up two degrees, let's make it go faster. We're offered the most probable future precisely so that we won't make that future happen. So what if we took a lot of these things which are allegedly predictive and said, okay, the heat list shows the Chicago police are discriminatory, so let's make sure that the kinds of things that would be automated under this don't happen? So I think that's one thing: take these and look at them as historical probes rather than as predictive. To offer one example of people who are doing this: at USC and the Geena Davis Institute, they're using these kinds of pattern recognition technologies to go through the past archive of Hollywood films, to see what kind of gender representation is there and to think through what kinds of representation there have been within mainstream media. Yeah, and maybe on a more hopeful note as well, I'm aware of efforts by the Algorithmic Justice League, which looks at how people can flag issues with algorithms with respect to how they're biased. And the hope is that you can improve systems, because you can say: this is something that should be fixed. There are risks inherent in opening up your systems to criticism by the public, but I think it's one really important step forward, allowing people who are affected by these technologies to impact their design. That could actually give rise to what Sasha Costanza-Chock calls design justice. I won't go into depth here, but it's really the idea that there's meaningful participation of community groups in the design of technology. Speaking of the participation of groups in the creation of technology: I don't know what it's like for a protester on the street right now, but I do know that in the summer of 2020 we had uprisings in the United States, but also in Hong Kong and Beirut. And I know that facial recognition is not just used in North America. 
This idea of surveillance is a global issue. And I know that both of you have talked about some of the ways that protesters have resisted surveillance. What caught your attention, Wendy, with some of these protests? Well, what's important is that they're very aware of how the technology works. Because again, what we started with is the ways in which publicity and surveillance are now intertwined. It's hard to think through publicity without thinking through surveillance at the same time. And what I would argue the protesters show us, and what we need to start thinking about, are public rights. Because I really think the work that is being done around privacy is important, but it's completely inadequate. There's a thought that once you're in public, you lose all rights. You're simply exposed. You're a public figure. But increasingly, we're all public figures. And what we need to be able to do is to be in public, vulnerable, and yet not attacked. What I find really important is the ways in which people offer each other shade, either by making sure pictures are taken in a certain way, or by people registering that they're at a certain place in order to provide a larger or different sense of location for these technologies. These are inadequate as long-term solutions. But what they bring out is this: if you think again of how all these recommendation engines work, how everything works, we're fundamentally intertwined with each other. Everything you get is based on what somebody else has done, which means we're fundamentally connected. So what if we took this position of being connected as a place from which to act, and to act collectively, and to say we need to be able to loiter in public? Because everybody should have the right to loiter. Everyone should have the right to be in public. If we switch it this way, I think it opens up an entirely different conversation. 
And more importantly, it moves privacy away from the idea that corporations get to know everything about you as long as they don't share it with other users, and toward thinking about it in a far more expansive way. You said provide shade. Is that what you said, provide shade for others, for each other? Yes, literally and as a metaphor. And this comes from a lot of the work that Kara Keeling has done. She's in African-American studies and in film studies, and we have been trying to think together through this question of exposure, shade and protection. It comes from work that she's done analyzing slavery, and how enslaved women took care of each other and their bodies, not because they owned them, but because they didn't, because they were outside of certain notions of privacy. So privacy, especially within the U.S., is very white. Like, the first case in New York State around privacy was the protection of Abigail Roberson, I believe, a white woman whose photograph was used to sell flour against her will. But while this case was going on, Nancy Green, who was Aunt Jemima, had no rights to her image. She was completely exposed in public. And so I think if we move away from certain notions of privacy, which have never been adequate, and instead think through publicity as an enabling position that isn't based on certain really problematic notions of property, this can open things up in really productive ways. I like that: the right to loiter, the right to be public, the right to be in public. And that comes from work done by wonderful Indian feminists who wrote a book, Why Loiter, which is all about how women, and Muslim men, need the right to loiter in public. I never really thought about it in that way, the idea of loitering being a right to take up space, really. But you're saying we all should have it. I absolutely agree with Wendy that we have a system in Canada that is actually very similar to the U.S., where we prioritize privacy. 
In fact, it isn't just privacy that is at stake, but the right to control our information. The German Constitutional Court calls this informational self-determination. And that phrase, to me, really encompasses and cuts across a lot of the issues we're talking about today, because we're talking about privacy, we're talking about algorithmic decision-making and recommendation systems. But privacy alone isn't enough to protect our rights. Right now, there are proposed changes to Canada's privacy laws, and they don't go far enough. What we need is a comprehensive approach that protects our right to informational self-determination and views us not as consumers but as humans, whose human rights are, at their core, the most important thing to protect. And that matters because if you're out in public and the police are using what's called an IMSI catcher, a device that mimics a cell tower to tell where you are, yes, it's your privacy at stake, but it's also your freedom to protest. At the root of it is the right to have your information treated in the way that you want it to be treated. Before we wrap up, do you have a couple of top things that you want to leave listeners with, whether things that you think individuals should be paying attention to, or things that we should look at from a policy level, or just observations that you think we should be making? We don't have to accept the technology that we're given. We can reinvent it, we can rethink it, we need to challenge the defaults. And secondly, technology isn't the solution to our social problems. We often frame it that way because there's this belief that somehow we humans are inadequate and we can build this thing that will take care of these problems for us. 
It never will be. It can be part of the solution, but only if we look at the technology closely and realize that the technology itself has these assumptions built in, and that it's also built on studying certain populations. Maybe one way to change these technologies, therefore, is to revisit the populations that were so key to building certain presumptions. Like, go back to that study of residential segregation in the United States, realize there was something more, so much more, that was happening, and therefore start with everybody we touch whenever we use these technologies as a way to open up different worlds. And to add to that, I really want to encourage any listeners who care about these topics to take up space too. This extends what you were saying, Wendy, about the right to loiter and the right, in some ways, to take up that space. I also really want to encourage people who deploy technology, whether you're a policy maker or the police, to really consider what is in the public interest, and part of the consideration of what's in the public interest is considering how your technology will impact equality-seeking groups. So it's twofold: really take up more space if you are going to be a person who's impacted by this, and also keep in mind the public interest and those equality-seeking groups when you are using this technology to the detriment of those people. Lovely to speak with you both. Thank you very much for taking the time today to be with me. Thank you for inviting us. It's been a wonderful conversation. Yeah, thank you so much. I'm really honored to be part of this. That's it for this episode of Don't Call Me Resilient. Thanks for listening. I'd love to know, are you as freaked out as I am after that conversation? Talk to me. I'm on Twitter at writevinita. That's w-r-i-t-e-v-i-n-i-t-a. And don't forget to tag our producers at ConversationCA. Just use the hashtag Don't Call Me Resilient. 
If you'd like to read more about the creeping dangers of surveillance, go to theconversation.com/ca. It's also where you'll find our show notes, with links to stories and research connected to our conversation with Yuan Stevens and Wendy Hui Kyong Chun. Finally, if you like what you heard today, please help spread the love: tell a friend about us, or leave us a review on whatever podcast app you're listening to us on. Don't Call Me Resilient is a production of The Conversation Canada. It was made possible by a grant for journalism innovation from the Social Sciences and Humanities Research Council of Canada. The series is produced and hosted by me, Vinita Srivastava. Our producer is Susanna Ferrera. Our associate producer is Ibrahim Dyer. Reza Dyer is our incredibly patient sound producer, and our fabulous consulting producer is Jennifer Moroz. Lisa Varano leads audience development for The Conversation Canada, and Scott White is our CEO. And if you're wondering who wrote and performed the music we use on the pod, that's the amazing Zaki Ibrahim. The track is called Something in the Water. Thanks for listening, everyone, and I hope you join us again. Until then, I'm Vinita, and please, don't call me resilient.