Good afternoon, everyone. Thanks for coming to our webinar today. My name is Shobita Parthasarathy, and I'm a professor in the Ford School of Public Policy and director of the Science, Technology, and Public Policy program here at the University of Michigan. I'm also the director of the Technology Assessment Project, which is the reason that we're all here today: to discuss the publication of our recent report, Cameras in the Classroom: Facial Recognition Technology in Schools.

Before we begin, I want to address the ongoing strike by the University of Michigan's graduate student union, the Graduate Employees' Organization, or GEO, that began about 10 days ago. These graduate students and others are striking because they believe that the university's pandemic reopening plans are insufficient and dangerous, because they want the university to cut off its relationship with the Ann Arbor Police Department, and because they want the funds from the university's Division of Public Safety to be reallocated. We are sympathetic to those concerns, and we thought long and hard about whether to have the event today, but we decided to go forward because the issues that we discuss in our report, about law enforcement bias against people of color and racism in surveillance technologies, are aligned with GEO's concerns.

Our overall aim today is to give you an introduction to the Technology Assessment Project, or TAP, as we call it, and our methods in analyzing facial recognition and other technologies, to summarize our findings a bit, and then to talk about our conclusions and policy recommendations. We will have time for questions at the end, so please put your questions in the chat and we will get to them.

TAP is part of the University of Michigan's Science, Technology, and Public Policy Program. Founded in 2006, STPP is a unique research, education, and policy engagement center concerned with cutting-edge questions that arise at the intersection of technology, science, society, and policy. Housed in the Ford School of Public Policy, we have a vibrant graduate certificate program, a postdoctoral fellowship program, public and policy engagement activities, and a lecture series. Our graduate certificate program is the jewel of our work. We teach students about how values, society, and politics shape technology and science, and how science and technology shape our world. Students also learn about the science and technology policy-making environment and how to engage in it. Our alumni now hold jobs across government, the private sector, non-governmental organizations, academia, and think tanks. Our program is unique both because of the profoundly interdisciplinary approach we take to teaching students about the relationships between technology, science, society, and policy, and because of the interdisciplinarity of the students who participate. STPP engages students from across the university, from the School of Information to the Business School; 26% of our students come from engineering, for example. And in recent years, given the growing interest in the social, ethical, and equity implications of emerging technologies, our program has been growing. At present, we have 73 students.

STPP launched the Technology Assessment Project in the fall of 2019 because we live in a world increasingly driven by technology, and this technologically driven world is producing increased unease.
As citizens, we are more and more aware of how technology is shaping our lives, and we're also just starting to see how it has disproportionate impacts, how it tends not just to reflect but even to reinforce inequalities, for example. At the same time, policy makers are often flummoxed about how to manage emerging technologies. They worry that they can't properly anticipate the consequences in order to regulate them properly. It seems like technological development moves so quickly; how can there be adequate time for policy discussion and legislative activity? On what basis should they be making decisions? It's because of these questions that we wanted to do something.

But technology assessment as a general idea isn't new. Scholars and policy makers have used a variety of techniques to try to anticipate the social, ethical, and environmental implications of emerging technologies and use this analysis to inform their governance. What we're trying to do here is something a little bit different. We're developing what I call an analogical case study method; in essence, using historical examples to inform our analysis of emerging technologies. I'm a social scientist myself, and I use these kinds of historical case study methods in my own research, so it's familiar to me. The basic idea of TAP, though, is that the implications of emerging technologies are much more predictable than we tend to think. We can actually learn from the history of technology to anticipate the future. If we look at previous technologies, we can understand social patterns in how technologies tend to be built, implemented, and governed, and the kinds of problems that arise. And if we understand those things, then we can do a better job of anticipating those consequences.

Before we get into the report, I want to introduce our brilliant research team. They're all here today, and you'll be hearing from each of them. Claire Galligan is a recent graduate of the Ford School of Public Policy and now an associate at Kaufman Hall in Chicago. Hannah Rosenfeld is a Master of Public Policy student and also a graduate certificate student in STPP, and she has a background in tech, so she represents some of the disciplinary diversity that I mentioned earlier. I want to emphasize that an important part of TAP is its training component. We're teaching students to develop skills in this evolving analogical case study method to analyze emerging technologies, and to develop key interdisciplinary research and writing skills. It's important to note that Claire and Hannah did much of the case development and writing. And of course, as the TAP report was released, they've had the opportunity to learn how to disseminate reports like these, how to engage with the media, how to write op-eds and other kinds of commentaries for public audiences, and of course, give presentations. Finally, Dr. Molly Kleinman, who is STPP's program manager and truly a Jill of all trades, is also an expert in educational technologies and therefore was an invaluable part of the research team. And we have a number of people from the Ford School helping us today as well, and I want to thank them too.

So why facial recognition, and why schools? In so many ways, the idea of using a digitized image of your face to identify or verify your identity seems like the stuff of science fiction, but it's increasingly used today, from China to Europe to the United States.
It's used in a variety of settings, including most notably and famously for surveillance and security purposes in law enforcement, but also even for identity verification in our smartphones. Here in Detroit, we have Project Green Light, in which businesses send footage to police and these digital images are checked against law enforcement databases. The new use on the block, so to speak, is facial recognition technology in schools. We've seen it increasingly used across the United States: in an age of concern about school shootings, monitoring who comes onto school grounds via facial recognition seems like a great solution. Perhaps most notably, in 2018 Lockport, New York, near Niagara Falls, announced that it would install a facial recognition system in its schools. The way it works is that cameras installed on school grounds capture the faces of intruders, analyze whether they match any persons of interest in the area, have that match confirmed by a human security officer, and then, if there is a match, send it to district administrators, who decide what to do. The system was approved by the New York State Department of Education last year, and it finally became operational earlier this year. Since then, it has been the subject of a lawsuit by the New York Civil Liberties Union, and in July, the New York State Legislature passed a two-year moratorium on the use of biometric technology in schools. It's now awaiting Governor Cuomo's signature. So these really are live debates.

At present, there are no national-level laws explicitly focused on regulating facial recognition anywhere in the world. In fact, quite the opposite: many countries are expanding their use of the technology without any regulation in place. Mexico City, for example, recently invested $10 million in installing thousands of facial recognition-enabled security cameras to monitor the public across the city. Huawei's Safe City system has been installed in 230 cities around the world, from Africa to Europe. But we did do a comprehensive policy analysis of the national and international landscape, and we looked for policies that might be interpreted as being related to facial recognition. We classified both proposed and passed policies into five categories. The first two are consent and notification policies, which focus on data collection, and data security policies, which focus on what happens to the data once it has actually been collected. A number of policies in these two categories have been passed around the world, most notably the General Data Protection Regulation, or GDPR, in Europe. Similar laws have been passed in India, Kenya, and, in the U.S., in Illinois and California. And European courts have found that the GDPR covers facial recognition. The third category is policies that tailor use, that is, that prescribe acceptable uses of the technology. We see this in the case of Project Green Light, for example, where the city of Detroit has limited the use of facial recognition to investigating violent crimes and has also banned the real-time use of video footage that was part of the original approach. Fourth, we see oversight, reporting, and standard-setting policies. These mandate different ways of observing and controlling the operations of a facial recognition system, including their accuracy. Most of these have only been proposed. And then finally, we see bans and temporary moratoria on facial recognition's use.
These have been proposed around the world, as well as at the national level in the United States. But where we see a lot more policy activity in the United States is at the local and state level. Some states have banned law enforcement's use of facial recognition in body cameras. And a number of cities, from Somerville, Massachusetts, to, most recently, Portland, have enacted bans of varying scope and strength. So we have some policy activity, but a lot of it is in progress, it's pretty piecemeal, and there's nothing explicit at the national level. And this raises some questions. When we think about developing policy for facial recognition, what should we be thinking about? And how do we know what we should be thinking about?

This brings us to analogical case study analysis. By analogical case comparison, we mean systematically analyzing the development, implementation, and regulation of previous technologies in order to anticipate how a new one might emerge and the challenges it will pose. So, when it comes to facial recognition, we looked first at technologies that seemed similar in both their form and their function to facial recognition. We looked, for example, at how closed-circuit television and school resource officers have been used, and what kinds of social, psychological, and equity implications they've had. Then, once we started to develop some ideas about the kinds of implications that facial recognition might have, we looked at other sorts of technologies that had those implications and tried to expand our understanding in that direction as well. So, for example, as we began to realize that facial recognition might create new kinds of markets in data, we looked at markets that had been created in biological data, like human tissue and genetic information, and tried to understand the implications of that. We did all of this iteratively, adding cases until we could clearly see five main conclusions emerging; we ended up with 11 case studies. Claire, you wanna take it from there?

Hello, everyone, I'm Claire Galligan, and I'm going to talk to you a bit today about the first three of our five findings about why facial recognition has no place in schools. First of all, facial recognition is racist and will bring racism into schools. Not only will it discriminate against students of color, but it will do so while appearing legitimate and fair, because technology is so often assumed to be objective and highly accurate. You may be thinking, but isn't technology objective, or at least more objective than humans? The answer to that is absolutely not. Technology does not exist in a vacuum. Facial recognition is developed by humans, based on data sets compiled by humans, and then used and regulated by humans. Human bias enters the process every step of the way, and discriminatory racial biases are no exception. We came to the conclusion that facial recognition is racist by studying the analogical case of stop-and-frisk, the policy that allows police officers to stop citizens on the street based on an incredibly relaxed standard of reasonable suspicion. Stop-and-frisk is like facial recognition in schools in that they are both seemingly neutral, but they actually discriminate against people of color because bias enters their use. Many would argue that stop-and-frisk is neutral because, hey, if you're not acting suspicious, you should have nothing to worry about. However, in practice, that turned out to be incredibly untrue.
Stop-and-frisk has been consistently proven to be waged disproportionately against black and brown citizens. Take New York City, for example. Throughout the use of this policy, people of color were stopped and frisked at far higher rates than white residents, compared both to their share of the overall population and to the rates of crime that they actually committed. Stop-and-frisk was a policy that, because it was susceptible to the racial biases of officers, criminalized and discriminated against people of color at high rates, even though it was supposed to be a fair and objective policy. Because facial recognition is similarly susceptible to user biases, we expect it, like stop-and-frisk, to unfairly target children of color. Finally, in addition to facial recognition's subjectivity, it has also been proven time and time again to be technically inaccurate. Facial recognition algorithms consistently show higher error rates for people of color. White and male subjects consistently enjoy the highest accuracy rates with facial recognition, while black, brown, and indigenous individuals, and especially women, are consistently misidentified. This will create barriers for students of color because they are more likely to be misidentified by facial recognition. They're more likely to be, say, accidentally marked absent from class, locked out of school buildings, or even flagged as an intruder in their own school. Altogether, facial recognition is racist because it is both technically inaccurate and completely inextricable from user bias. It will discriminate against and create barriers for students of color at school, and so we believe it should be banned.

This brings us to our second finding. Facial recognition will bring state surveillance into the classroom. We expect that administrators of facial recognition at school will use this technology liberally, conditioning students to think it's normal to be constantly watched and to have no right to privacy at school, an environment which is supposed to be safe and constructive. This has proven to have negative emotional effects on children. The analogical case study of the use of closed-circuit television in schools led us to this conclusion. CCTV is used in most secondary schools in the United Kingdom, and because facial recognition is virtually the same technology as CCTV, just with added and much more powerful capabilities, we felt this case was a perfect example of how putting facial recognition in schools will play out. This case revealed to us that when administrators are entrusted with powerful surveillance systems, it is hard to control how they use them. We call this mission creep: the use of surveillance technologies outside of their original agreed-upon intent. Interviews with students at these schools revealed that though these systems were originally implemented for security purposes, they were ultimately used for behavior monitoring and control. CCTV was supposed to be used to detect school intruders, but instead it was used to punish students who violated the dress code, were tardy, or were behaving out of turn. Students reported that this use of CCTV as a quasi-all-seeing eye at school made them feel powerless, criminalized by their school, and mistrusted. They reported that they would even change how they acted or dressed at school in order to avoid punishment. And this heightened anxiety and reduced feeling of safety at school is extremely likely to degrade a child's educational quality.
Because CCTV is just like facial recognition, except for the fact that facial recognition can not only surveil but also automatically identify students, we are confident that, just as with CCTV, administrators of facial recognition will be unable to resist the temptation to use it outside of its agreed-upon purposes. And we're confident that the presence of this constant surveillance will make students feel anxious, stressed, and afraid in a setting where they are supposed to feel as safe as ever.

And this brings me to our third finding. Facial recognition in schools will punish non-conformity by creating barriers for students who don't fit into specific standards of acceptable appearance and behavior. As I've already touched on, facial recognition is less accurate when used on people of color. However, this is only the tip of the iceberg. Facial recognition also has higher error rates when used on students with disabilities, gender non-conforming students, and children. Yes, children. That is particularly problematic if this technology gets placed into K through 12 schools. We're confident that these higher error rates will mean that instituting facial recognition in schools will create barriers for students who may already be part of marginalized groups. We drew this conclusion from thinking about Aadhaar, India's nationwide biometric system. This is the largest biometric system in the world, having collected fingerprints and iris and facial scans from a majority of India's over one billion citizens. Enrollment in Aadhaar is required to access many public and private services, including welfare, pensions, mobile phones, financial transactions, and school and work enrollment. However, like facial recognition, Aadhaar is designed in such a way that it excludes certain citizens, specifically citizens who cannot submit biometric data, such as manual laborers or leprosy patients who may have damaged fingerprints or eyes. This means that these individuals, who are already disadvantaged, are now also unable to access food rations, welfare, or pensions. Therefore, because these groups don't fit a certain acceptability standard, they face even more disadvantages in society through no fault of their own. We expect that facial recognition will replicate this in schools. Finally, we know from the CCTV example I discussed earlier that it is likely that facial recognition will be used in schools to police behavior, speech, and dress. We expect that this will police students' methods of personal expression and perhaps even get them in trouble for not conforming to an administrator's preferred appearance and behavioral standards. All together, because facial recognition is likely to malfunction on most students who are not white, cisgender, and able-bodied, we can expect facial recognition to cause already marginalized students to be incorrectly marked absent from class, prevented from checking out library books, or prevented from paying for lunch. We also expect that children will be directly or indirectly discouraged from free public expression. Overall, facial recognition systems in schools are poised to privilege some students and to exclude and punish others based on characteristics either outside of their control or their chosen methods of personal expression. And with that, I will turn it over to Hannah.

Thanks, Claire. I'm Hannah Rosenfeld. There's so much detail and so many great examples of everything that we're talking about today in the report, so I do really encourage all of you to read it.
But for right now, I want to focus on just a few highlights from our cases for the last two major themes that we touched on. So first, let's talk about how companies commodify and profit from data. Obviously a company can't own your face, but it can own a representation of it, like the one that's created by a facial recognition algorithm. Now, a data point may not be particularly valuable on its own, but companies can create value by aggregating data, either with a lot of other data of the same type or with different information that gives it context. Despite this, individuals typically don't own any of the rights to their biometric data, and they often don't have a meaningful way to give and revoke consent to collect and keep that data. These new databases are vulnerable to exploitation via theft, subpoenas, and mission creep.

So one example that we talk about in the report is a surveillance company called Vigilant Solutions. They started out by selling license plate scanners to private companies, and they collected all of the license plate data and information from all of their customers in a cloud. They also knew where all those scanners were placed, so they had that geolocation information, and they were able to recognize that they could use that to build a new product that gave real-time information about the location of cars around the country. They packaged that product and sold access to it to law enforcement. The law enforcement product was so valuable that they started to actually give away their original license plate reading systems for free, to get even more data. So they created a new data market around license plate data that didn't exist before, and then they were strongly incentivized to expand it. We talk more about the implications of behavior like this in the report, but generally we do expect that facial recognition companies operating in schools will also have a strong incentive to find secondary uses like this for student data, and to get that valuable data they will try to expand their opportunities for collection as much as possible by pushing to get into as many schools as they can.

Paradoxically, US courts have traditionally held that individuals do not have property ownership over their own biometric data. In part that's because the data doesn't have a lot of value when it's considered on its own. So one case that we used to look at this in the report is Moore v. Regents of the University of California, from 1990. In that case, Dr. Golde of UCLA used samples from his patient John Moore to develop a valuable research cell line, and he sold it, profiting off of it, without initially telling Moore about the research or the potential profit. Though after he patented it, he did eventually notify Moore of this use. So when Moore sued, the court determined that Dr. Golde should have told him about the research upfront, but that Moore's tissue was considered discarded, even when it was inside him, because he couldn't really use it for anything on his own. Therefore he had no property right to it. And following that and a number of other similar cases, today a company or researcher generally cannot use biometric data without consent, but they don't owe anyone a property stake in something that they develop using that data. However, meaningful consent is often limited when it comes to complex technology systems, and especially in situations like schools, where you can't really opt out.
In the US in particular, students really don't get a chance to engage with any of those questions, because the Family Educational Rights and Privacy Act, FERPA, actually allows schools to consent on behalf of their students. So consent isn't really on the table for the students who may be surveilled.

This push by companies to expand surveillance is bad because it'll expand the reach of all the problems Claire already mentioned today, but it also introduces some new issues of its own. First, without strong data protections this information is vulnerable to being stolen, and it's impossible to replace things like fingerprints and faces. So once it's stolen, it's out there. Second, data collected for one purpose will be used in other ways, often without the same level of scrutiny that the original use received. That's the mission creep that we talked about. Thinking back to Vigilant Solutions, many of those private customers who used those initial license plate scanners may not have signed up to share that information in the cloud if they had known that they were going to be generating data that would be packaged and sold to the police. And even if a company doesn't actually package that data and profit off of law enforcement use, once it's been collected the police can often subpoena that information, and it's a lot cheaper and less politically difficult to subpoena it than to build a similar database on their own. That's what happened last year with Ancestry.com, when police subpoenaed information from their DNA database to solve a crime. It would probably have been very difficult for the police to put that database together on their own. You can easily imagine that police could subpoena facial recognition information from schools to identify a student or to track a child's whereabouts, if we're looking at the facial recognition case.

Our last theme is institutionalizing inaccuracy. Again, in the report we really systematically unpack what accuracy actually means, and what it doesn't mean, for a technology like facial recognition. And we really talk about all of the ways that this idea is complicated for a socio-technical system. But accuracy could be a webinar in and of itself. So right now I just wanna directly tie what Claire said earlier, about technologies being inextricable from the human societies that they are in, and that's what we mean by a socio-technical system, directly to the questions about accuracy. And I wanna show how these questions often get overlooked, leading to poorly functioning technologies becoming entrenched in daily operations. Proponents of facial recognition often answer critiques of the technology by pointing out that these algorithms learn, so they will get better; they'll be improved by being used. Actually, facial recognition's accuracy problem begins during the learning stage, and that's early in development. To build and test an algorithm, first you build a training dataset of facial images, and the system learns how to identify faces from it; then you apply that algorithm to a testing set of faces, where the researcher knows the identities but the program doesn't, and you see how often the algorithm gets it right. That's the accuracy measurement that you're getting from most companies. So the demographic mix in the training set is gonna determine how strong the algorithm is across various demographics. But problems can be really easily hidden if the two datasets, the training and the testing set, have the same deviations and they're both different from the real-world population.
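To make that concrete, here is a small illustrative sketch in Python. The group sizes and per-group accuracy rates below are invented purely for illustration; they are not from the report, NIST, or any real vendor. The sketch only shows the arithmetic point: a single headline accuracy number can look strong when the test set shares the training set's demographic skew, and then drop once the same model faces a more realistic population.

# Hypothetical numbers for illustration only.
def aggregate_accuracy(groups):
    """groups: list of (n_test_images, per_group_accuracy) tuples."""
    total = sum(n for n, _ in groups)
    correct = sum(n * acc for n, acc in groups)
    return correct / total

# A skewed test set: mostly faces the model handles well,
# very few faces from the group it handles poorly.
skewed_test = [
    (830, 0.99),   # group the model was trained heavily on
    (150, 0.95),   # moderately represented group
    (20,  0.65),   # barely represented group: its failures barely register
]

# A test set that mirrors a more realistic school population.
representative_test = [
    (500, 0.99),
    (350, 0.95),
    (150, 0.65),
]

print(f"Reported accuracy on skewed test set:  {aggregate_accuracy(skewed_test):.1%}")    # ~97.7%
print(f"Accuracy on representative population: {aggregate_accuracy(representative_test):.1%}")  # ~92.5%
# Same model, same per-group error rates; the only thing that changed is
# who the model is tested on, and the headline number moves by five points.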
So for example, if you don't train with any black women, you can still get your accuracy number to show high levels of accuracy in your tests if you also don't test with any black women. But when you apply that system in the real world, where black women exist, you're gonna have a problem. And this is exactly what we're seeing. The most common facial recognition testing dataset is 77.5% male and 83.5% white. And NIST, the main US agency involved in assessing facial recognition accuracy, hasn't disclosed the demographic makeup of the database that it uses to test software, so it's difficult to contextualize their reports about accuracy. But we do know that they built another database that was intended specifically to test how well facial recognition performs across racial groups, and it used country of origin as a proxy for race, and it didn't include any countries that are predominantly black. A high overall accuracy can also hide poor performance in one group if that's outweighed by very high performance in another group. So you can begin to get the sense that it's very difficult to tell, from one or two numbers that a company might provide or that a school district might have access to, how accurate this is going to be across the populations in your school.

Another answer to critiques like this is that there should always be a human making the final matching determinations. However, when we looked into other forensic technologies that use similar human backstops, like fingerprinting, predictive policing, and CCTV, it turns out that across the board, when there's uncertainty in the process, forensic examiners tend to focus on evidence that confirms their expectations. From CCTV studies, we can get a sense of how well humans might actually perform as safeguards for facial recognition, and research shows that observers who had been trained to identify individuals from footage made correct identifications less than 70% of the time. And that number drops even lower when observers are asked to make cross-racial identifications, to identify someone who is of a different race than their own. As a reminder, even if a person correctly rejects a misidentification, that student has already been exposed to additional scrutiny by the administration, rendering them hyper-visible for other critiques and sanctions that might otherwise have gone unnoticed. And it also opens up a new opportunity for human biases in enforcement and punishment. In a facial recognition pilot in South Wales, the police revealed that in the process of making only 450 arrests, the algorithms falsely identified over 2,000 people; by those figures, roughly four out of every five of the system's matches were false. So we can get a sense of how big this problem is, or could be. And this is another way that technology is fundamentally part of human society. Students who are most susceptible to misidentification are also those who are the most likely to face outsize punishment if they are subjected to scrutiny.

Another issue is the question of who determines what level of accuracy is acceptable. When governments don't set the standards, and we found in our cases that for forensic technologies they tend not to, courts end up being the main arbiter. So in the US, trial judges in the federal court system and most of the state courts use the Daubert standard to determine whether or not expert witness testimony about a forensic technology is scientifically valid for the case. And that's done on a case-by-case basis.
They consider things like potential error rates, the existence of maintenance standards, and the technology's reputation in scientific communities, but they do not have minimum criteria for any one of these categories. So as a result, the accuracy of fingerprinting is ultimately determined in the legal system by the quality of the lawyers and the experts involved in a given case. Right now, some state courts accept facial recognition testimony as expert scientific evidence, while others, and the federal court system, don't, suggesting that this is not wholly reliable. And it also creates essentially two separate standards of evidence: one for those with the means to mount a strong legal defense, and another for those without such means. Further, law enforcement may still have an incentive to use weakly supported technologies if they are consistently able to get wins in court with them.

So obviously there's a lot more to talk about, but what I'm getting at here is that accuracy is much more complicated than it seems at first. And these processes translate human biases into the software and into the system the software is part of. Despite this, people tend to perceive, as Claire said earlier, technology as objective and therefore inherently free from bias. So police have a long history of leveraging this idea of objectivity in court, arguing that some arrests could not possibly have been biased because they were based on an algorithmic prediction, and that does tend to hold up in court. That can make these systems even more dangerous, because stakeholders who might usually be sensitive to human biases can overlook similar critiques and safeguards when it comes to technology. So now I'm gonna pass it over to Molly to talk a little bit more about what we anticipate going forward.

Hi, thanks Hannah, I'm Molly Kleinman. So now this brings us to our recommendations. Our essential recommendation is straightforward: countries should ban the use of facial recognition technology in schools. In all of our research, we were unable to identify a single use case where the potential benefits of facial recognition would outweigh the potential harms. And this is true for the kinds of in-person facial recognition systems that we were mostly considering in our research, but it is also true for the kinds of uses that are expanding now in the online education tools being used during the pandemic. We have kids sitting in front of cameras all day, and these companies must not be allowed to collect facial and other biometric data from children who have no choice but to use their products. Furthermore, again in this COVID-19 situation, we're seeing other kinds of biometric surveillance expanding, such as thermal tracking, and many of these systems have the same risks and dangers as facial recognition. We believe it would be best if countries banned them as well. In the absence of an outright ban, at a minimum we are recommending nationwide moratoria that would be long enough to give countries time to convene advisory committees to investigate and recommend a regulatory framework. And when we say an advisory committee, we're talking about something that would be really interdisciplinary: it would include experts in facial recognition technology, but also in privacy, security, civil liberties law, the social and ethical dimensions of technology, race and gender and education, and child psychology.
And a moratorium should only end when the work of the committee is complete and the regulatory framework has been fully implemented. Separate from a moratorium or ban, countries should enact comprehensive data privacy and security laws that address facial recognition and other kinds of biometric data, if they're not already in place. In the EU, the GDPR does not explicitly address facial recognition, but courts in several European countries have ruled that facial recognition data is included under the GDPR's definition of personal data and that it is therefore not permitted under the GDPR. We're not going to go over them all here, but in addition to these nation-level policy recommendations, our report also includes recommendations for state-level and district-level policymakers, to help them provide effective oversight in the absence of a national ban or moratorium, which is the situation we find ourselves in right now. As we discussed earlier, there's very little regulation happening at the national level. So we hope that you'll take a look at the report and read those other recommendations. As part of that work, we compiled lists of questions for individuals who might be dealing with potential implementation of facial recognition, including school administrators, teachers, parents and guardians, and students. Our goal with these lists was to help individuals ask critical questions so that they can make more informed decisions for whatever their zone of influence is, even if it may be only a single school building. So now I think I was gonna hand things over really briefly to Shobita now to wrap up, and then we'll start taking questions.

Yeah, sure, thanks, Molly, and thanks, Claire and Hannah, for participating in the presentation and talking about your results. So that, I think, is a pretty good place to start in terms of giving you a general sense of what the report talked about. As Hannah mentioned, in the report we talk in much more detail, of course, about those recommendations and about each of these conclusions. We looked at a number of different cases for each. So you'll find those resources in the full report, and we also have a shorter executive summary. And then finally, we have those maps that I presented available as separate supplements as well. And as you folks are asking questions in the chat, I'll also just say that if you are interested in STPP and its work, you can visit our website, you see the URL here, and you can also follow us on Twitter. And then finally, if you want to keep up to date on what we're doing and get our newsletter, you can reach us through stpp.umich.edu.

So, Molly, do we have questions that we wanna...

We do, they are starting to roll in. So the first question, I think this is a big one, but I think it'd be good to talk about. Someone has asked us: do you think that facial recognition should be banned across the board, or just in schools?

I'll start. Of course, I always have opinions, but Claire and Hannah, what do you guys think?

I mean, based on what we learned in our research, obviously students are particularly vulnerable, but I can't really imagine a situation in which the benefits of facial recognition are going to outweigh the drawbacks. It's going to be just as inaccurate in all these other places. It's part of the same feedback loops in all these other places.
And a lot of these things, like feeling surveilled and avoidance... a lot of the cases that we looked at, we drew on cases that weren't just in schools. We learned a lot from people out in society, and from adults. And so a lot of these learnings, I think, map pretty directly onto a lot of other scenarios.

I would generally say that I agree with that. What I would just add, I think, is that, of course, as someone who thinks about case comparison, facial recognition use in schools is one case. It's a hard case, I think. And I agree that there's a lot of vulnerability among children, so I think the use of the technology in that context has to meet a really high threshold. But I also think that, as Molly said, we've offered a lot of recommendations and they're pretty detailed. And I think that for use cases that are perhaps more complex, use in law enforcement more broadly, for example, or even identity verification, we've provided resources that I think can be useful for those uses as well. So, you know, as Hannah said, this idea that humans are part of the technology and we have to address that in some way, the idea of data markets... these are things that we all have to be thinking about in a really serious way, regardless of the use of facial recognition. And my concern is that we're not thinking about those sorts of things enough, regardless of the use. And you see that even most recently in the context of Project Green Light in Detroit, which of course is not the use in schools, but a black man was misidentified because his image was captured and it was linked to an old driver's license photo. So that kind of problem that we're identifying, I think, is something that we see across facial recognition uses.

All right, so the next question I have: given that private and public sector advocates of this technology are likely to use the charge of inaccuracy and bias as leverage to highlight how the tech industry is improving and training, or improving and training the human backstops... Sorry, let's try this again. Given that advocates of the technology are likely to use this charge of inaccuracy as a way to highlight how the technology's accuracy is improving and how training for the human backstops is improving and diversifying, the arguments against using the technology are being addressed and will soon be less relevant. So why ban its use outright?

So I'm gonna take the director's privilege and say that I think that Hannah did a great job of answering that question. The fundamental point that we're trying to make is that it is impossible to eliminate the inaccuracy in this technology. We cannot tech our way out of this. There is no tech without humans and society, and that introduces systematic bias, structural bias, and individual bias. So I'm just using different words than what Hannah said. I just don't think that that's a persuasive argument. I hope that our report provides some details to explain why that argument isn't adequate, but that's the sort of top-line conclusion that I draw anyway.

So this is a question that's actually related to that one. Where do we think the attitude of technology's objectivity comes from? And is there any sort of shift that we're maybe seeing generationally in this belief that technology is objective? I'm seeing lots of nods. Yeah, and Claire, did you want to say something?
'Cause I jumped in last time, so.

I mean, that's a great question. I definitely couldn't tell you where the attitude comes from. I mean, I think there's a belief in science; in the public policy field, we talk a lot about the black box of science and innovation, where people assume that what's done by scientists in a lab is perfect. You assume that if it's informed by science and research, it's not likely to be inaccurate. So I think it comes partly from that black box fallacy. And this is also what STPP argues: that we need to get ahead of this and regulate these things, and not just blindly trust that they're accurate. But I mean, I don't know exactly where it comes from, and is it changing generationally? I think you're talking to not the most representative group to answer that, 'cause we would all say yes, but that's because we talk to STPP scholars all day. But I would like to think yes. I mean, obviously generations are just getting more and more technologically entrenched. And people use facial recognition every single day, and I think there is a better understanding that the technology is not perfect.

I do wanna say that I think that some groups have always seen this as not objective, and so it's not always news to every group. But in general, a lot of people, especially scientists and people in tech... when I was in tech, I spent a lot of time in Silicon Valley, and I still talk about my research now with people there, and I very frequently still find myself having to bring up that idea that tech is not objective. So I don't see there being a huge generational shift at this point. But I do wanna point out that some groups who have often been harmed by things that are called objective have always known that they're not objective. So it's not that we're just now learning this; it's that we're just now able to elevate it academically. This has a history in STS, in science, technology, and society studies, and we're just now seeing some of it get elevated in a more mainstream way. So I think that there's actually a lot of power in the idea that your discipline is objective. The idea that if we can build a new facial recognition system, or a bail bond algorithm, and call it objective, gives us a lot of power not only to avoid some questions, but also, economically, a lot of power to get funding for projects like that. There's a lot of power that goes along with that objectivity. That doesn't mean it's intentional, that people are deliberately trying to build up that myth of objectivity, but I think it's a product of a lot of funding condensing around that over time.

I'm calling this next question for myself. What do we make of tech that uses facial recognition not in the name of security or law enforcement, but more for specific teaching and learning contexts that are often driven by faculty demands, such as online exam proctoring? And I don't like those either. I think anytime you have a situation where you're treating your students as probably trying to cheat, or as probably criminals, you're disrupting the relationship with your students. That's not a pedagogically sound way to approach education. And I would argue that if you need this kind of policing technology in order to assess your students, you need to think about assessing your students differently.
And I realize that's a lot easier said than done, and that there are disciplines where it can be difficult to always come up with the kinds of assessments that would work in an entirely remote situation. And of course now we're dealing with a humongous expansion of remote education that people weren't prepared for, and these tools can seem really convenient, like they're gonna make life easier at a time when nothing in life is easy. But I wanna think about: for whom is it making life easier? And I wanna center the students' experience when those kinds of technologies are being used. I don't know if others wanted to add to that.

I'll just briefly say that facial recognition technology was already being used in schools across the US pre-pandemic. And then, as you said yourself, it's only expanding now, and it's expanding at colleges and universities too. And I think with the things that seem like seductive solutions, there are actually other kinds of problems, potentially, that we need to be thinking about. Exactly.

So this next question says: I'm from a place overseas where the government is beginning to roll out smart sensors capable of being used for facial recognition, and there has already been resistance locally. Are there any simple-to-understand resources that we could recommend that can be used to spread awareness of the issues raised?

I mean, our report is a great place to start. There are some questions at the very end that are tailored for individuals facing these kinds of deployments. It's focused again more on the kinds of questions that they can ask and less specifically on advocacy, but I think that it's a pretty short step from some of those questions.

Yeah, I would just add, at the end of our report, we have the questions that Molly talked about, which can hopefully equip people with some of the kinds of questions they might ask. We also have, I believe it's on the website, and if it's not currently, we'll be sure to add it, a one-pager that's a one-page summary of the five conclusions; that might be a good thing to distribute in terms of resources. And then finally, at the end of the report, before the references, I know it's lengthy, but all of the sections are hyperlinked, we, Claire, gathered a number of resources: think tank reports, but also advocacy groups and others who are doing work in this area. So that might be a useful resource for you to find out more. And we tried our best, I do international research, so I kept pushing everybody to make sure that we were thinking about these things internationally, and so there are some international resources there that might be useful for folks around the world.

So this is another question: a number of technocratic solutions are focused on diversifying datasets to better identify black people, which unfortunately represents a form of predatory inclusion. Are there strategies that policymakers and advocates can use to highlight the racist aspects of many of these surveillance technologies? Hannah, you wanna take that one?

Strategies that we can use... I did see a lot of that.
I mean, I know that while we were doing this research last year, there was a big story that came out about Google. The company had faced this critique about not being good at identifying black faces, and it paid a contractor to go out into Atlanta and give homeless people $5 to use what was supposedly some kind of phone game; what it was actually doing was capturing their faces, with a very complicated consent process that really wasn't explained to them. And so obviously, if the response to the criticism that you're being racist is to go out and do something like that, it's hard to say which is worse; it's very bad, and it's also a bad outcome. I'm not sure what I would suggest as a solution. I think one of the things to understand is that if we can get this idea out there, that simple fixes like adding more people to your database are not sufficient to fix this problem, that it would take a societal change to fix this, because these technologies are part of society and they're totally inextricable from that, then when you start to see those kinds of actions, you can very clearly say that company is not ready to make a technology that isn't racist. So I don't have a specific solution for how you should do it, but certainly I think a better understanding, generally, of how this technology does not and never will stand on its own may be helpful in setting up for success in advocacy battles like that.

So, I am often accused of being a pessimist when it comes to technologies, but I'm gonna try my hand at being an optimist, and also maybe being a little self-congratulatory. But I think this is why, when you see the report, we talk about a ban and then we also offer policy recommendations. It's because, on the one hand, our instinct is to say there's something deeply wrong systemically and socially that can't be fixed really easily, and so we really need to think about stopping this, full stop. But we also understand what the real world looks like, and so we wanna address it. And so when you think about a question like this, and I have to say I'm gonna write down "predatory inclusion" because I think that phrase is really evocative, my initial instinct is to say we can use the kind of analogical case study approach that we're trying to pioneer here. And as I was thinking through this, I thought to myself about two kinds of examples. One is the infrastructure for human subjects research. It is certainly not perfect; it has a lot of problems. But when you think about dealing with exclusion by some sort of forced inclusion or preferential inclusion, or the creation of messed-up incentives, well, we have a human subjects research system that has evolved, that has certain kinds of institutional buy-in, that we can take as something that has worked in society, and maybe we can innovate beyond that, right? So maybe that's at least one place to start, even if it's not a perfect solution. One of the ways that people have found it not to be a great solution is the fact that it relies on individual consent and not community consent. So then there have been, in recent years, methods for trying to get community consent when it comes to biomedical research. So that's another place where we might say, okay, we can think about how we might get community consent when it comes to issues like trying to diversify data sets.
Of course, again, the diversification of data sets is not gonna solve this problem, but if that's what we're thinking about we can use these kind of analogical cases. And again, I also wanna emphasize one of the things that I'm trying to do when I think about this is to look across tech sectors and to look historically. That is key because I think too often we're siloed in different areas of technology when we actually can learn across them. And one of my hopes is that in this kind of effort that we can sort of start to look at examples that might look like they're really different but actually can teach us some interesting things. And I think we're at time. So we have some questions we didn't, we weren't able to get to today. We might see if we can find a way to follow up with folks to answer those questions. Please feel free to reach out to us and get in touch. Our contact information is all in the report. Should we, I don't know if you had any last words. Thank you all for coming. And as Molly said, you know where to find us if you have further questions. Thanks everyone.