Hi, everyone. Welcome. The Eclectic Film Series committee, in partnership with Digital Matters, has gathered together today with our expert panelists to discuss the film Coded Bias. This film explores MIT researcher Joy Buolamwini's startling discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all. If you were unable to watch the film before today's discussion, I'm going to share my screen and show you where you can find it. Go to lib.utah.edu and, in the search bar there, search for Coded Bias. Over here on the left are the results from the catalog. If you click on Coded Bias, you'll see that we have a physical copy available, but we also have online access, so you can click right there to watch the film. If you're interested in this film and others, the library webpage has an entry for databases. You can search by type, like videos, or by subject, so we'll go to film and media arts. This shows all the databases we have access to. Some that are really popular are Digitalia and Films on Demand, which carries things like PBS titles, and another popular one is Swank. So we have a lot of databases, and it really comes down to your preferences and interests, but there are also some fun cinematic ones you can explore. Thank you so much for letting me share those library resources; I hope you take advantage of them. Let me turn the time over to Rebecca Cummings, who's going to be our facilitator for this discussion. She's the interim director for Digital Matters. Rebecca. Great. Thank you so much, Angela. And thanks to the Eclectic Film Committee for partnering with Digital Matters to host this discussion of Coded Bias.
I'm so excited for it today. We are fortunate to have some wonderful panelists to help us make sense of this incredibly complicated and alarming documentary. Some of us were able to gather a couple of weeks ago in the library and watch Coded Bias together; I'm sure some of you have watched it on your own, maybe more than once. And it's possible that some of you have not yet watched the film, so I appreciate Angela showing us how to access it, since it's available in perpetual streaming through the Marriott Library. For those of you who haven't seen it: as Angela said, the film follows the work of Joy Buolamwini, an MIT researcher who stumbled across the fact that facial recognition software couldn't identify her face, and couldn't identify darker faces and women's faces as readily as it identified white male faces. That was simply because of the training data sets the technology had access to, the data that taught the technology what a face actually is. From there the film takes us on a journey of how artificial intelligence and machine learning are increasingly becoming gatekeepers for a wide variety of issues: everything from which communities are being surveilled, to who has access to high-quality health insurance or healthcare, to who's being hired or considered for particular jobs. The stretch of decisions being made in automated fashion was really concerning to me as we watched this film. I recently watched a different panel discussion about Coded Bias where Van Jones referred to the film as a four-alarm fire. I thought that was a really accurate description. He compared it to Al Gore's An Inconvenient Truth, which came out 15 years or so ago and really heightened awareness of climate change and what an issue that is for us.
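For readers who want the mechanics, the kind of disparity Buolamwini measured can be sketched as a simple per-group error audit: run the model over a labeled benchmark, then tally error rates separately for each demographic subgroup. This is a hypothetical illustration; the group names and the numbers below are made up for the example, not the film's actual figures.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the error rate per demographic subgroup.

    `records` is a list of (group, predicted, actual) tuples,
    a stand-in for an audit benchmark like the one Buolamwini built.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy audit: a model trained mostly on one group's faces performs
# far worse on the underrepresented group (illustrative numbers).
audit = (
    [("lighter-skinned men", "face", "face")] * 99
    + [("lighter-skinned men", "none", "face")] * 1
    + [("darker-skinned women", "face", "face")] * 65
    + [("darker-skinned women", "none", "face")] * 35
)
rates = error_rates_by_group(audit)
print(rates)  # {'lighter-skinned men': 0.01, 'darker-skinned women': 0.35}
```

The point of an audit like this is that aggregate accuracy (here, 82% overall) hides the disparity; only disaggregating by group reveals it.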
And that's what Coded Bias is doing for us in regards to artificial intelligence: the decisions it's making, surveillance, and a whole host of other issues. That being said, I'm now going to introduce our panel, who are going to help us wade through some of these complicated issues. We are so fortunate today to have David Roh, who is a professor of English at the University of Utah. We have Sarah Sinwell, who is an associate professor of film and media arts at the University of Utah. And we also have Trevor Smith, a recently graduated master's student in communication, whose final master's thesis was on critical approaches to generative artificial intelligence. Before we jump into the questions, I would love to hear from our panel, maybe for a minute or two each, on what brought you to issues around artificial intelligence, machine learning, and maybe the ethical implications of some of these technologies. So, David, let's go ahead and start with you. Yeah, thanks, everybody. Thanks for the invitation to participate in the panel. I've always been interested in the guardrails for technology. As much as I'm an advocate and have sort of an optimistic streak about the potential for technology in all facets of our lives, I am also cautious and skeptical at times, because I think there's this kind of undeserved wellspring of goodwill that we have toward technology, without really understanding how it could go sideways in many ways without the proper regulatory oversight. And so this film dovetails with ethical concerns I've had for a long time. That's great, and I am happy to hear that there is a streak of optimism in there, because I'm not going to lie, leaving the film I felt a little bit despondent. Sarah, do you mind giving us a little introduction? So I saw this film at Sundance. Sundance did a special screening of it online, and then they had an online Q&A with the director.
I always jump at any opportunity to see a Q&A with a film's director, so I've actually seen this film multiple times, and I would have seen anything Sundance presented with a Q&A afterwards. But this film interested me in particular because I'm interested in media, technology, and these kinds of larger issues. One of the things I thought was especially interesting about this film is that it starts with a graduate student's research. As someone who teaches graduate students at the U myself, I find the possibilities of what a graduate researcher can discover remarkable, and I was really moved, watching this again, to think about how she's actually effecting change: she's talking to Congress about changing how we regulate the use of AI. So I really was struck by the idea that it starts with a graduate student, and that we can continue this discussion. And they interview, of course, lots of experts in the field; I'll say more in a few minutes about some of those people, whose books I was already familiar with, and now I'm seeing them in the documentary. That's true, it's a very empowering position, that a graduate student could effect all this change. Exactly. I love that. Trevor, can we hear from you? I'd love to.
How I originally got into this particular area of research is kind of embarrassing to say, but it was through my love of science fiction, which for me often turns into a morbid fascination with things that simultaneously terrify me but that I also obsess over because I find them so fascinating. For me that was definitely AI, especially AI that generates new content, which is the subject of my research. It wasn't originally going to be the primary research I did during my master's, but after receiving the fellowship from Digital Matters it turned into my number one thing, so I have Digital Matters to thank as well. That is so great to hear. So I have some questions queued up for the panel, but we would love to hear from people who are joining us on Facebook Live. Go ahead and comment on Facebook Live if you have questions, and I think Jordan will feed those over to us in the Zoom chat. So yeah, we'd love to hear from you as well, but I will start with a few questions, and I'm going to ask different panelists to lead off. That's just a jumping-off point; I'd love to hear from each of the panelists on these questions if you have something to contribute. I will start by directing the first question to Sarah. So, Coded Bias, as you were just mentioning, interviews a dynamic group of scholars and advocates, mostly women of color, on the subject of artificial intelligence. What information or interviews stood out to you while you were watching the film, and why do you think they resonated with you?
So, like I said, it definitely resonated with me that it starts with a graduate student, and how much political change she's creating as part of this documentary; in terms of U.S. policy, she's even impacting policy changes. We always think about how you actually create change, not just think about how you could change AI, and there's such evidence of that in this film. The other thing that was really interesting to me, and I think this is also relevant to a lot of the work Digital Matters is doing, is that we've had discussions of, for example, Safiya Noble's book Algorithms of Oppression, and of Weapons of Math Destruction, where we as a group have been thinking about these issues even before the documentary came out. And I personally work on Twitter, so a lot of these scholars also talk about Twitter and Google searches and things like that. One thing that was really interesting to me is that I was already following some of these people on Twitter, like Noble, for example, and also Zeynep Tufekci, who wrote the book Twitter and Tear Gas. I was following them pre-pandemic, and now, during the pandemic, there's been a kind of resurgence of interest in these technology conversations, because, as we are right here, we're all on Zoom; we're so informed by our technology now that I think there's been a resurgence of interest in how impacted we are by technology and what kind of role it has in our everyday lives. A lot of the interviews spoke to the question of the things we don't notice about technology: how our Facebook or our Twitter or any of those accounts are feeding us information, feeding us advertisements, feeding us all sorts of things we don't even think about as we use our technology.
And then there are the larger issues the film deals with, like healthcare and imprisonment; housing was a really big one that I was interested in. Again, these are issues you don't think about, like how housing is being impacted by these technologies. I think a lot of the people being interviewed in the film are really trying to think about how we can directly create change and how this impacts marginalized peoples in particular, because those are the people these technologies are being tried out on, right? So yeah, I'm curious to hear what other people think too. Yeah, I thought it was really interesting in the film, the housing complex where they were trying to use face recognition to get into the apartment building, and the residents were asking, why does our apartment need to be like Fort Knox? Why does it need to be this secure? I was so impressed with the range of expertise in the film; I felt like I could have curated a reading list just off of everyone who appeared. Another person featured was Virginia Eubanks, who's also been featured in the Digital Matters new media studies reading group; we read her book Automating Inequality, I think two years ago now. And that gave me a little more insight on some of the training data sets, because the film is 90 minutes, so it's only able to touch on some of these issues, which is why it's great to dig in a little deeper with all of you. David, Trevor, I could jump to the next question, or did you have anything to add here? Great. So, Trevor, I'm going to direct this next question to you. Sarah just mentioned how, with AI, we're all sort of aware of how it generates our advertisements and maybe the news feeds we see on the internet.
I was really struck when watching Coded Bias by how embedded artificial intelligence has become in our daily lives and the degree to which it affects really important decisions, such as how long someone's prison sentence might be or what healthcare is available to them. Is there a particular aspect of artificial intelligence that keeps you up at night? What about this technology do you find the most concerning? Because it's probably not what advertisements we're seeing on Facebook. Right. This film mentioned it too, but a lot of my research centers on AI that learns as it works and perpetually uses tests to see how close it is getting to actual human decision making. The thing that scares me the most is the idea that these AIs could already be getting good enough to pass a Turing test, to be indiscernible from human decision making, because that brings up a ton of questions as far as how we value them, the jobs we put them in, and the ethics of them. And I come from a family of lawyers, so I can't help thinking about the liability of AI decision making, especially when it gets not only embedded into the decisions our society makes but impossible to discern that it's an AI doing it and not a human. That really scares me, keeps me up at night, like you said. Isn't that when we reach singularity? Yeah, there's that too. Or even just the idea that they can get so good at imitating humans that they could be "more human" in some elements of what that means; they could get better, by whatever metric you want to use, at doing what they're designed to do than we could.
You mentioned the legal aspect of it. I actually had a conversation last night with the Utah ACLU director about the rollback of civil liberties with some of this technology, which maybe we'll get into in this discussion, but it gave me another thing that keeps me up at night: all the progress we've made on things like fair housing, or discrimination when it comes to hiring practices. AI has the potential to roll back some of those things, and that really terrified me as well. Trevor, you mentioned that you come from a family of legal scholars and lawyers, so I was wondering about this with respect to AI: where does the liability lie? Because if you have a company that produces or uses an AI for the decision-making process, they actually have a kind of legal and financial investment in defending the AI as perfect. If it makes a mistake, it doesn't serve them to say, oh yeah, we messed up, or there's some mistake here. Right, and so it's a weird kind of tension, where they're supposedly offloading the decision making onto the AI, but at the same time there's an investment in saying it's the perfect intermediary, it's flawless, and if you come at us with an accusation that it's wrong, then we're going to defend it to the death, because it's all part of our workflow and processes. Right. I think it's interesting, and this is something I was thinking about in regard to the first question as well, that I really thought the film did a good job explaining.
Successfully and effectively, the film challenges the Western conception that anything empirical, anything automatic, is therefore perfect and unarguable, while at the same time we consider it lesser than us, less valuable, and less human, obviously. But the thing that freaks me out with the legal stuff is that the technologies are developing faster than we can come up with precedent or legislation to regulate and litigate them. A really visible example of that, in a relatively simple case, is self-driving cars and the litigation around the AI that goes into them; even those questions of liability are complicated, and we haven't figured that out yet, and we're already talking about technologies that can send people to prison incorrectly and things like that. I've just got an audience question in the chat, so I'm going to go ahead and engage the audience. This question is from Greg Hatch. Greg asks: as the relatively new field of AI technology is developed, deployed, researched, analyzed, and refined, can the panelists expand on the ethical implications of deploying imperfect technologies, particularly those that have been found to amplify racial bias? How else could a potentially powerful tool like this be developed without crossing this ethical boundary? I actually have some thoughts about that, as everyone's been talking. I was really struck, rewatching the film, by the corporatization of AI: how the tools they're using are made by Amazon or Google or Facebook, and how that's just a limited list of organizations; they said there are nine big ones. So there's a very limited list, and they're all in the U.S. and China. And then there's the fact that the police or our healthcare systems are using technologies designed by corporations to sell products.
I think that is incredibly problematic, and I feel like, again, we're taking all these technologies for granted, and taking these companies' involvement for granted. Something that continues to strike me, especially after rewatching the film, is this idea that a federal organization, or a state-run organization, or a nonprofit organization, or whatever it might be, would use a technology that's not designed for, as a lot of the people in the documentary were saying, the social good; these tools aren't designed for social good, they're designed to sell a product. So I think we need to think more about who's designing this product, what it's for, and, if we're reusing that product, whose interests that serves. And I think the question of why marginalized communities are so greatly impacted by these technologies is part of that conversation. And I'm not sure that's the conversation happening at Facebook or Google or Amazon. I thought it was interesting in the film to see how it's being deployed in China versus the United States: in China it's more of a government-deployed and government-researched operation, and here it's more commercially based. I just thought it was fascinating that in real time we can see these two case studies and the various issues with both; it doesn't seem like either one is optimizing for privacy or personal liberties or anything like that. I think I want to answer. Oh, sorry, really quick.
Often, the wrong answer to this question, I would say, is Facebook's internal motto of "move fast and break things." I think we can all agree that's the wrong attitude for development, but it still sometimes feels like that's what's happening. I mean, Rebecca, you've read my paper; it's shocking that even in the time since I graduated, four months ago, there are new case studies where I think, oh, I really should have talked about that. I'm not going to rewrite my whole paper, but that just reflects how the speed of the development is really shocking and concerning; it just doesn't feel like something could happen so fast with appropriate ethical considerations in place. That's true. Yeah, I think "guardrails" was the good term you used earlier; we need some of those to make sure we're doing this in a way that retains some of the things we value in a democratic society. I actually wanted to add to that, if you don't mind. This is actually not a new dynamic at all: deploying technologies that are an extension of existing corporate or state power in ways that negatively harm Black and brown communities. Think about the infrastructure building of the interstate highway system; it's no accident that a lot of the highways cut right through Black and brown communities, dividing those communities and bringing a lot of destruction. We have hopefully learned from that process, and now, when we have these huge governmental works projects or infrastructure-building projects, we have a whole system in place where those communities can give feedback and make sure they are represented, and where, when we do build these things, all the interests are able to communicate. But with AI right now, because there aren't any guardrails or formal regulations, there's no feedback mechanism.
Right, we just have to rely on the goodwill of Amazon or Facebook or whoever to take those considerations into account. They have no legal responsibility to do so right now. And so we need some sort of intervention, so that a mechanism is in place for those interests to be represented. Yeah. So we have one audience question, but I'm going to jump to one other question of mine and then go back to the audience question, and David, since we're going down this path, I'm going to direct this one to you, okay? In Coded Bias, Joy Buolamwini shows how the biases and inaccuracies in artificial intelligence can cause all kinds of problems, such as wrongfully accusing people of crimes; there's a great example of that in the film where a 14-year-old was accused, and I think the statistics cited were something like 85% of people being wrongfully identified. But it's also possible to imagine a future where facial recognition technology improves dramatically and is nearly perfect at identifying faces, and there might be significant concerns there as well. I believe someone in the film referred to this potential future as "optimizing oppression." Can you speak to the concerns surrounding surveillance and civil liberties in a world of perfect facial recognition? Yeah, that dovetails into what we were just talking about. I'm actually skeptical about this idea of perfect facial recognition, but if, hypothetically, it came to be, we would need some kind of legal framework for making sure that power isn't abused: by the state, by criminal interests, by whoever has their hands on the technology. And this is actually part of a larger discussion that preceded AI and the rise of AI.
Even when CCTVs were starting to crop up all over England, there was a discussion about surveillance and how penetrating and invasive it all is; with AI on top of all that, it can become more powerful, more robust. And that goes to this larger argument that has been going on in legal circles. You'll hear from the authorities, the CIA, whoever: if you've got nothing to hide, why worry? If you're a good citizen, why do you care if we have your face in our database? And that really flips things; it's a real perversion of our principle of the right to be left alone, at least here in the United States. It's not up to me to prove that I'm an upstanding citizen; it's up to you to prove that I am worthy of surveillance. But if everyone is being surveilled, that changes your behavior, regardless of whether you're a criminal or just a normal person walking down the street. If you are aware that every single interaction you have in public, or even in private, is being recorded or surveilled, then you start to second-guess yourself; you act like you are being surveilled. And with AI, there is no private space anymore. What are you doing on your computer? What are you doing at home? It can penetrate those conventional physical barriers. So that's an erosion of privacy that goes beyond what the framers of the Constitution ever imagined, and the question is what legal frameworks we have in place, or can put in place, to build those rights back outward.
Yeah, and this is why I love having a literature scholar on the panel too, because there's this whole body of literature that explores these possible futures where we all live under surveillance, and if we're familiar with those works, it should give us pause about how much access we allow the government to have to every aspect of our lives. Okay, thank you, David. Let's see. It's so great to have all this audience participation; I'm going to jump back to an audience question and then maybe we'll go back to one of mine. So Eliana Massey, who's one of our current Digital Matters interns, asks (and you can tell she's a philosophy major here): do you think crowdsourcing should be used for ontology engineering in artificial intelligence? I think I can answer this at a smaller scale. I haven't done a ton of work with the ontology of AI itself, but with my research about AI making art, I think most people find it a really cool idea to crowdsource all of humanity to make human-like AI. But while that aim is noble, the sourcing we do is inevitably going to be biased. In the case of my research, the data is always pulled from the vast archives of the internet, which, to us as Western people, looks like everything that ever existed, all human thought; but really it's not. And even the material that is there isn't organized in a way that's fair or unbiased. So I think it's a really cool idea, but really hard in practice to do in a way that isn't just replicating the already pro-Western elements of, at least, the internet, if that's where we're doing the crowdsourcing.
I can speak to that a little bit, because I think what the question is driving at is what the way forward is for AI to be built ethically, and it speaks to the problem of the black box. A lot of AI technology and machine learning algorithms are proprietary, so we don't actually know how they work, except for the companies and the engineers at whichever company built them. By opening them up and making them open source, the entire community is able to read and understand how they work, and then they can see the flaws and the gaps, and that allows the companies, or whoever wants to contribute, even an amateur working in their spare time, to make them more robust and fill those holes. Whether that actually serves larger justice-oriented principles is kind of a different discussion. While we're starting to talk about how the sausage is made and how this technology gets developed, I have a couple of questions around that. So, Joy's work again challenges the idea that technology is a neutral decision maker, immune to human bias. Can someone talk, and this is for anyone on the panel who feels comfortable addressing it, about how human bias gets embedded in code, and what steps, if any, we can take to mitigate that bias? I will just say, Trevor gave a great talk at Digital Matters a couple of semesters ago, and we had an undergraduate CS student who actually made the comment, "Data can't be racist," and it led to a great discussion. I think there is this idea that data can't be biased, but we've also seen, obviously, that it is. So, who can speak to that?
Yeah, really quick: I think the idea of the positive impact of representation in media has been challenged recently, as maybe not as effective or as important as we previously thought as critical scholars, but I think Safiya Noble especially, in Algorithms of Oppression, makes a really good case for why representation is important in developing code. I don't know if that's a perfect solution, but I think it's a really good one, and maybe the best we have. Representation in coding is extremely important, maybe even more so than media representation, and that's an impact this documentary shows well. Correct me if I'm wrong, but didn't the documentary say something like 14% of the coding is done by women and people of color? So I feel like that's definitely part of the conversation: the people doing the coding and the people being impacted are different, and people who are not doing the coding are being impacted in different ways. That's definitely an important factor to keep in mind. And again, what's striking about the documentary is how all these women and people of color are being interviewed, and the way they're engaging with things like Google search algorithms and these other technologies is a really important part of the discussion as well. That was something that resonated with me early in the film: if someone who looked like Joy weren't testing the software, we may never have known that facial recognition software was so terrible at identifying darker faces and female faces; it was only through someone who was involved in development that that bias was able to be identified. That just speaks to the importance, when these technologies are developed, of having a diverse workforce from the ground level, even in the planning stage.
So that all those interests are represented. My son just turned two, and a couple of months ago he had to have an eye exam, and the nurse came in with her little screening device. That exam usually fails when it's administered to Asian Americans, because the company that made the device didn't bother to test it on Asian people. So it might give false positives or inaccurate readings about the state of his eyes. Again, this is tangible, and that's not even AI; with every aspect of technology, you have to have that consideration and testing in mind, or else people are going to fall through the cracks. Yeah, that's a great example. And Sarah, you had mentioned that 14% of technologists are women. I think an important aspect of AI is that its premise is using historical data to predict the future, which means it can't account for the changes and improvements we've made. When the AI algorithm in the movie was deciding who was qualified to work as a programmer, it was eliminating every single female applicant, because it was using historical data to say what would be true in the future. I just thought that was something we should be more aware of. I'm thinking of the moment in the documentary when she's giving her statement to Congress. I'm fascinated by the idea that they could film that moment, first of all. But secondly, I was really struck by how the people in Congress were entirely unaware of all of these things, so that really was striking to think about.
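The hiring example just mentioned, an algorithm trained on historical data reproducing past discrimination, can be illustrated with a deliberately naive sketch. Everything below is hypothetical (the résumé features, the history, the scoring scheme); it is not the actual system from the film, just a minimal demonstration of how bias flows from data into a "neutral" model.

```python
from collections import Counter

def train_feature_scores(past_hires):
    """Score each resume feature by how often it appears among past hires.

    A deliberately naive "model": if past hiring was biased, the
    learned weights inherit that bias directly from the data.
    """
    counts = Counter(feature for resume in past_hires for feature in resume)
    total = len(past_hires)
    return {feature: count / total for feature, count in counts.items()}

def score(resume, weights):
    """Sum the learned weights of a resume's features (unknown features score 0)."""
    return sum(weights.get(feature, 0.0) for feature in resume)

# Hypothetical history: nearly all past hires were men, so a feature
# associated with women never appears among hired resumes at all.
history = [{"python", "men's rugby"}] * 9 + [{"python"}] * 1
weights = train_feature_scores(history)

applicant_a = {"python", "men's rugby"}        # matches the historical pattern
applicant_b = {"python", "women's chess club"}  # equally qualified, penalized
print(score(applicant_a, weights) > score(applicant_b, weights))  # True
```

Nothing in this code mentions gender, yet the ranking disadvantages the applicant whose features were absent from the biased history: the "prediction" is just the past, replayed.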
They were describing getting some of their information from driver's license databases from about half of the states in America, and I was thinking: which states, right? And how are people of color and women represented in those states? Just that concept, I think, was really fascinating in the film as well. Yeah, absolutely. I think I heard a statistic that one in two Americans already have their faces in a facial recognition system. It's just so prevalent already. We do have another audience question that I'm going to direct toward the panel. Steve asks: many of the algorithms discussed in the film came under fire because of how they perpetuate existing biases and stereotypes, but it seems like AI could also be programmed to do the opposite. Are there cases where this type of technology is being used to encourage positive social change rather than stagnate or reverse it? When I started my project — maybe differently from David — I was extremely critical of AI, and I didn't really plan on adding any nuance to that take throughout my whole paper. But luckily one of my committee members, who was a fellow here as well, introduced me to Indigenous epistemologies and how they regard AI. That was kind of a eureka moment for me. Reading some of those pieces made me realize that even in being hypercritical of AI, I was almost belittling it as less than human.
I could improve my research by considering different ways to be in relationship with the non-human, other than the classic Western view that humans are the pinnacle of existence and creation. There's a piece I love that is a collection of different Indigenous groups' perspectives on relationality with AI, and how their different epistemologies might treat AI differently than Western epistemology does, so I can send that along. I think a lot of the work that Indigenous scholars are doing with AI and technology is just really, really inspiring and amazing. It's like Trevor just threw out a librarian challenge: if anyone can go find that article and drop it in the Facebook chat. It's one of those "the book is blue, can you go find it for me" things. Oh my God, there it is. Yeah, if you're interested, that's a great piece — one of my favorites. Great, thank you, Trevor. And that does make me feel a little bit hopeful: just because things have been problematic in the past doesn't mean they can't be improved in the future, in the ways we think about technology and deploy it. I also think it helps to keep in mind the limitations of technology. Because the general public doesn't really understand how AI works, they, like the film says, ascribe almost magical properties to it. It seems like this wondrous magician up in the sky doing things. But when you understand how it operates, you can see it has severe limitations and gets things wrong all the time. Keeping that in mind can reframe our thinking about technology and AI, so that we use it as a supplementary, complementary tool rather than as the end-all solution to everything.
I'm going to give another historical example of a previous, analog technology that had some of the same problems: photography, which we tend to treat as objective, this thing that just captures the world as it is. All the film stock and chemical processing that went into manufacturing were attuned to capturing white skin and white people on film, never really calibrated for Black and brown people. So you had a whole generation of photographers who just never knew how to compensate for that. Once those limitations got into popular consciousness and discussion happened around them, then people could start to see those limitations and correct them. So it's a push and pull between the technological conversation and the social-cultural conversation that will hopefully align those interests and make the technology work for the better. It's so great to be reminded that these issues are not new issues — they're just being embedded in different, new technologies. Okay, this has been touched on, but I'm hoping we can go a little deeper, because I think this is going to be one of the big pushbacks someone might give against the film, or against concerns about surveillance in general. It's not difficult to imagine a person who learns about artificial intelligence and facial recognition technology and says, "Well, I don't have anything to hide, so I don't have a problem with my face being in a database if it helps law enforcement, for example, catch more bad guys — and maybe it makes the world a little bit safer." How would our panel respond to that? If you were talking to that person, what would you say to them? This is a person who is all for surveillance technologies, no limits.
You know, maybe they don't know a lot about surveillance technology, but with just a passing understanding of it they say, well, I don't mind if my face is in a database if it helps us catch more criminals and make the world safer. I'll give you a quick example — a recent case that scares the crap out of me. A father's son had come down with some kind of illness, and he was corresponding with the doctor over, you know, MyChart or whatever. He said, "My son's penis is really inflamed," and the doctor said, "Okay, send us pictures of that area and we'll take a look." So he did. He had Google Photos on his phone, and the pictures automatically uploaded to his Google Photos archive, and Google's algorithms flagged them: you're trafficking, you're a pedophile. So they shut down all of his accounts. He couldn't access his work accounts or his home accounts — completely separated from his digital life. And he had no idea how to rectify the issue; there's no real process for correcting it. The algorithm had made its judgment, and so he was totally screwed. No access, no appeals process, nothing you can do. And that's a case of the algorithm falsely accusing somebody of a crime he didn't commit, because he was doing what he thought was right, and there's no recourse. So the question then is: is that one person having his whole life blown up by this faceless algorithm that you can't ever speak back to — and it can happen to him, it can happen to you, it can happen to anybody — worth this hypothetical benefit of catching more criminals?
I mean, the thing I would push back on with this person is that there's got to be a better balance struck; you can't just have it completely one way or the other. Yeah, I think I'd say, in the nicest way possible, and as a privileged person: just reevaluate how your privilege relates to, or influences, your relationship with law enforcement and government. Which is simple, but, especially as a white person, I continually have to do that — especially when I catch myself thinking, well, maybe it wouldn't be that bad, which I don't really believe. I think it's important to always reassess how our identity confers privilege, especially as it relates to law enforcement and politics and things like that. There's actually another really good book that was not mentioned in this film — there were several in the film — called You Have the Right to Remain Innocent. It talks about your Sixth Amendment rights and how, because there are so many laws on the books, we all sort of break laws every day without even being aware of it. You're doing it in a way that doesn't harm anybody; we just don't get caught. But when you have this perfect surveillance system, it's sort of unsettling what you might end up with: something that allows police to intrude into our lives in ways the law previously protected us from. One thing that comes to mind with facial recognition is that several states allow undocumented immigrants to have a driver's license, which is perfectly legal in the state they live in.
And yet we're also aware that different government agencies are using facial recognition databases — gathering these photographs of people doing legal things — and could use them to go after people for other things. That's where I start feeling really nervous. It would scare me to think that an undocumented immigrant might not take advantage of their legal ability to drive in their state because they don't want their face included where it can be surveilled. There are just so many instances like that where I feel like the unintended consequences of this technology could be really dire, especially for particular communities. One of the ideas from the film I've given some thought to is that one of the people being interviewed suggested we need an FDA for AI. I was thinking about that: what would that look like? Who would be on that committee? All those sorts of questions. I would like to create an FDA for algorithmic issues, but I don't know who would be put on that committee outside of Safia Noble and the people being interviewed here. I think that concept is really interesting, though. I mean, do we all agree — I agree with Sarah that there should be some kind of regulation around this technology. Does anyone have an idea of what that might look like, or who might be informing it? Because I look at a lot of our decision-makers in Congress, and I don't know if they have deep knowledge of technology or the ethical implications of technology. So what would an FDA for algorithms look like? I think the precedent with media technology has frequently been self-regulation, which isn't a perfect solution, but with film, for example, that's been the approach, and that might help with the issue
Rebecca mentioned, of just not knowing what an AI is and what it's doing if you're, you know, 78 or whatever. So I think self-regulation could be an answer, especially since you'd be asking the people who are in power and have the funds to do it. I don't know if that's really an option — I don't feel very strongly about it now that it's coming out of my mouth — but I think it is a possibility. You know, I think about this a lot because I'm a film professor, and the MPAA, which decides whether a film is rated R or PG-13 or PG: A, it's a really old institution, so let's set that aside. And B, I think it's really important to note that we don't know who's on that committee. The process of who's on the committee and how these things are determined is invisible. And that is something we certainly would not want to replicate if we created that sort of organization for AI work: the fact that it's invisible, that we don't know who's on the committee, that we don't know how these ratings are determined, and that it has larger institutional implications. Certain theaters won't show an NC-17-rated film, for example, and that's why you don't want an NC-17 rating — suddenly you may not be able to get your film screened in cinemas across the United States. So this is definitely something worth thinking about: who's on the committee, and what structures are put in place to determine how these regulations would be implemented. Oh, go ahead, dude. I'll just — I'm going to boost that idea and say that we should tax these companies and use those funds to gather a bunch of academics and researchers and legal scholars to become that regulatory oversight committee. I like that. Yeah, I was going to say as well that when Rebecca first asked the question, my gut reaction was, well, I don't want a robot on the committee — but then.
But then the more I thought about it, I was like, well, is that just the Western view that humans are the peak, masters of all non-humans? And then I thought, well, maybe we should have a robot on the committee. Maybe the best among AI in the future should at least have a seat on the committee too, depending on the level of how much they're thinking for themselves and so on. At first I thought that was kind of a silly thing to mention, but I think it probably is important, at some point, to consider whether AI should be involved in the decisions we're making about it, if it keeps getting better at making decisions. So the good news is that all this conversation has started pressuring these companies to self-regulate. You see a bunch of press releases coming out from different companies saying that they're concerned about it. Next month I'm going to Meta to speak with their virtual reality teams, because they're really concerned about how they're going to build out these virtual worlds in a safe, respectful way, and they want to make sure they get input from a wide spectrum of academics like myself who work on these technologies. And I'm going to go because, you know, they're going to wine and dine us. But I'm also a little cautious, because if they put us up at a nice hotel and kind of jazz us up, I don't know if that's going to pull the wool over our eyes. I don't know if that's necessarily the best setup for self-regulation, because we're going to be entranced by all their nice facilities and all the great food and all that stuff.
So, you know, I don't know if it's necessarily a good idea for them to self-regulate like that. I think maybe there should be some independence and, you know, fewer fringe benefits. And then you're going to end up on a billboard that says, "Ethicist David Roh gives our algorithms the thumbs up." I was planning on saving this question for last, but I feel like we could talk about it for a bit. Full transparency: I actually tried to get researchers from the University of Utah who develop artificial intelligence to be on this panel, and I was unable to get anyone. But I still very strongly feel there is a role for humanists in this space, and that's why we're having this panel discussion today. All four of us on the panel are humanists, with backgrounds in literature, communication, philosophy, and film and media arts. So my question for the panel — and David, you kind of started leading us down this path — is: what role do you see for humanists in the development of technology, and specifically artificial intelligence? I think Rebecca and David were there at my talk in the fall. I was really intimidated, because I had to get into a lot of really technical computer science definitions to work out the conceptualization I was going for. I was presenting to a classroom full of CS students, and I was really worried they were going to say, "Oh, that's so wrong — the way you define this and that is not correct at all." So I gave them my conceptualization of what an algorithm is, what an artificial intelligence is, and so on and so forth. And they were like, "Yeah, that's great — that's something we haven't really thought about." And I'm like, really? You haven't had to define these categories of different technologies and how we can label them?
And I think that kind of conceptual and definitional work is not only really important but also something that humanists do — I'm not saying that CS students or CS colleagues don't, but I personally love that work, so I'm happy to do that part of it for them. I was noting that the film ends with poetry. I forgot who it was, but she recites some of the poems she had written based on her research. That speaks to the crucial imperative to have humanists involved, because there is something about aesthetics, something about art, something about broader, indefinable, intangible forms of what it means to be human that has to intersect with this discussion. Yeah, that was Joy's poem. I was also struck, rewatching it this time, by that song at the end about coding — it's fascinating to think about what kind of art we can create about coding. I was also struck that Joy creates this group called the Algorithmic Justice League, which I just love as a title, because I'm imagining them suiting up like superheroes and fixing all the algorithms. I think this idea of algorithmic bias is something that humanists are also thinking about. Again, just the fact that Joy happened to be the person using that software is why she made all these discoveries. And this question of inclusion, and the EDI work being done in relation to coding — there's so much work being done there now, and taking those things into account is really important. We are starting to see some of these jobs emerge, too. Yamna, who was one of our ACLS fellows a couple of semesters ago, has been hired by Twitter as, I believe, an ethicist. And for those of you who've seen The Social Dilemma, Tristan Harris was a Google ethicist.
I'm curious how much these positions are window dressing that the big companies use to cover their multitude of sins, and how much they're really incorporated into the development of the technologies and influencing it. I don't know if anyone on the panel can speak to that, but I'm curious if you have any thoughts. I think it's telling that one of the AI ethics researchers at Google, who came out and criticized the technology, was fired — and she's a Black woman. It seemed like they valued her input right up until the point where she was openly critical of the company, and then they were like, you're gone. So I'm skeptical overall, because the structures of power still remain: their bottom line is not humanity or society, it's their shareholders. And it goes back to that perennial question: do we try to change things from within or from without? Tristan Harris is a great example — he tried to work from within, and now he's working for, I think it's the Center for Humane Technology, outside of the companies. So, do we have any more questions from the audience? I saw a comment from TJ that made me laugh: after Trevor was talking about a robot having a seat at the table, TJ said, do we need an AI to regulate the influence of AI on society? Yeah, I think if we're talking about robot rights, I feel like there's a case to be made for that, you know what I mean — maybe not yet, but one day. So another question I have for the panel, and this is not an easy one: what is your vision for AI development in the future? How would you like to see the technology evolve? One question, but a couple of answers. One, and I think we've kind of talked about this, is we need some kind of overarching regulatory framework outside of private industry.
Second, we need that regulatory framework, or some other entity, to be able to look at the code and see how these systems operate. If companies want to protect proprietary code, I understand, but there needs to be some way for someone outside the company itself to examine the code, because then there's independence there. And the third thing is some kind of feedback mechanism for people who end up on the wrong end of AI to be able to talk back. For me, since most of my research is on creative projects that use AI, the ones that make me the most excited and least scared are the ones that are collaborations between human and non-human — human and machine. I think Eric Hanman, a former fellow of Digital Matters, and his work with technology and creativity is a great example of that. Through collaboration, guardrails are already somewhat in place, and it operates at a smaller scale that's easier to control and more compelling, creatively at least. I love the mention of all our fantastic former Digital Matters fellows; it makes my heart so happy to think about all the great work that's come out of the lab. Let's see, TJ put another comment in Facebook — and TJ, I appreciate you interacting with us: large language models are now generating code in production; Google has 10,000-plus programmers that accept 2.6% of AI-generated code. Okay, so we only have a couple of minutes left. If there are any other questions from the audience, we might have time for one more. But I also want to throw it out to the panel, as I always do in the last couple of minutes: is there anything you want to talk about that we haven't had a chance to discuss on this panel in regards to the film? Can I offer one more critique of the film?
One thing that concerned me: I think the film did a great job of centering its subjects on women, women of color, and people of color in general, and it used China and the UK as comparative case studies. On the UK side there were a lot of critics of AI, but on the China side it was mostly celebratory — there was one woman, the young skateboarder, who was kind of portrayed as somebody who drank all the Kool-Aid. And obviously there was some discussion about Hong Kong and how the pro-democracy protests are strongly against surveillance, but there was nobody to speak for them — just a complete absence. I thought that was a real missed opportunity. That's a great observation. Yeah, I always really appreciate — and TJ brought this up a little bit, as did David — the discussion of labor and workers as it relates to AI, because historically we thought that labor-saving devices would create a workers' utopia, and that's not necessarily been the case. So before we get to the point where we just start outsourcing everything to AI, we really need to talk about what it can do to workers, and poor people especially, and I would like to have seen that a little bit more in the documentary. But that's just one thing I'm interested in. What about you, Sarah — any final thoughts on the film or the issues? I've never had a chance to teach Coded Bias, but I think about teaching it, as you can imagine. I'd be really interested to show it in a classroom, because I'd be curious to see how people responded. One of the things I would ask is: were you familiar with these issues before you saw the doc, or are these things totally new to you? That's what I'm not sure about. You know, she's made another film, about TikTok, called TikTok, Boom.
Don't confuse it with tick, tick... BOOM!, which is also a great film. TikTok, Boom was at Sundance, I believe, last year. So this director is making a name for herself by critiquing all of our technologies, which I think is really interesting. But I'm curious whether people are already familiar with these issues and the film just brings them to the fore, or whether they're totally new to people — I don't know the answer to that. I don't know if you all know. Yeah, I would also be curious how your students would respond, because I've found that different generations hold their privacy closer or less close. I find that Gen Z, for example, really never lived in a world with that much privacy. In the library world, we're so protective of our patrons' privacy that we don't keep records on what people check out, and it's funny how to the next generation that almost feels outdated — like, well, you could make recommendations to me if you knew what I checked out previously. So I would be curious what your students would say about the privacy and surveillance aspects of the film, as opposed to my intuition about it. So I think, yes, we have hit our one o'clock hour. I so appreciate our panelists engaging in this rich discussion. Thank you again to the Eclectic Film Committee for allowing us to highlight one of the films available for streaming through the Marriott Library. I appreciate you all being here, and I hope that this conversation continues and that humanists continue to be engaged with issues of technology and artificial intelligence. Thank you all for joining us.