Hello, all. Thank you for joining us for our panel discussion, Educating Future Data Workers About Ethics and Bias. I am Andreen Soli, director of the Public Interest Technology University Network, PIT-UN for short. The network currently comprises 36 colleges and universities that have committed to helping us build out the field of public interest technology within academia. Practically, that means we want to help students and faculty do the following. First, create interdisciplinary and cross-disciplinary curricula that allow students to assess the ethical, political, and social dimensions of new technology, and help students develop the skills and knowledge they need to create and design technologies that serve the public and generate public benefit. Second, develop experiential learning opportunities, such as clinics, internships, and externships, that highlight how students and faculty can pursue PIT across all sectors; we strive to answer the question, what are the career pathways available in PIT? Third, support and recognize faculty for doing research and teaching within PIT as part of their tenure process; we want to ensure that PIT is not a disqualifier for faculty members who choose to engage in this work. And finally, we hope to evaluate and share stories of best practices in PIT with each other and with the broader PIT ecosystem. This conversation is really a perfect example of that fourth goal. Membership in the network is open to any nonprofit educational institution within the US. You can find our application at newamerica.org/PIT-UN; each fall you'll see a new application posted there.

Now, to get us started — I'm very excited about this conversation — I want to introduce you to our panelists, all members of PIT-UN. They are going to talk to us about what they see as the intersection of their work with PIT in terms of ethics, bias, and open data. First up is Meredith Broussard, a data journalism professor at the Arthur L. Carter Journalism Institute at New York University and research director at the NYU Alliance for Public Interest Technology. Next we have Kathleen Cumiskey, professor in the Psychology Department and the Women's, Gender, and Sexuality Studies program at CUNY's College of Staten Island; from this point forward I'll be calling her Katie. Then Mihir Kshirsagar, who runs Princeton's interdisciplinary technology policy clinic at its Center for Information Technology Policy. And Mona Sloane, adjunct professor at New York University's Tandon School of Engineering and senior research scientist at the NYU Center for Responsible AI.

As a reminder, we are live-tweeting this via New America PIT, so please be sure to follow us there; the handle will be in the chat directly.

First, let's get a sense of the big picture. This is a question for each of you, and we'll start with Meredith and go around. Social scientists have been studying the historical, political, economic, and ethical dimensions of technological tools for a very long time. What are some of the concrete ways you have been engaging students in seeing the impact, or disparate outcomes, of existing or new technologies on people?
How do you help students understand and anticipate the risks associated with technology? Meredith, let's start with you.

Sure, this is a good question. One of the things I teach is data journalism, which is the practice of finding stories in numbers and using numbers to tell stories. Given the traditional function of the press as an accountability mechanism, it's very easy to lead students from journalistic skepticism to the kind of inquiry this requires. One of the things journalists tell each other is: if your mother says she loves you, check it out. So we need to be skeptical about claims that are made about technology; technology is not automatically for the public good. One of the things Ruha Benjamin writes is that we should assume that technology discriminates by default. There is bias in the world, there is discrimination in the world, and all AI systems do is reflect and magnify the world as it is. If we start from the default assumption that the technology is discriminating, it allows us to more accurately pinpoint exactly where the technology is going wrong, who it is biased against, who it is not serving. And that allows us to remediate those issues, where possible.

Thanks. Mona, how about you? How do you get that across fairly quickly to your students?

Thanks, Andreen, and it's a pleasure to be on this panel with everybody; I'm very happy to be here. Quickly? Not at all. It's a very long process, and it's an ongoing conversation. One of the things I like to do with my students in the courses I teach on the topic is to get them to think about the kinds of assumptions that get baked into technology. Picking up on what Meredith said and what Ruha writes about assuming there's discrimination, we can then look at what that specifically looks like with different kinds of technologies, in different domains and different spaces. I think it's really important for students to get to a place where they can tease out those assumptions. We very often talk about implicit bias that gets baked in — "it wasn't our intention" — and that's language we also find in policy a fair bit. So getting them to a point where they can identify what kinds of assumptions get baked into technology is key. The way I do that, as a social scientist who teaches at an engineering school, is partly very traditional — reading social science scholarship — but also engaging with the very vibrant scholarship on this topic that has emerged over the past ten years and become really prominent over the last three. The other thing I like to do is get them to a place where they can relate this to their own lived experience. We always talk about intersectionality in the tech discourse, and at this point also in policy and popular media.
What that really means is looking at the lived experience of the individual as it relates to different social positions, and getting students to a point where they can do that for themselves when they look at different kinds of technologies and their applications. Most importantly, it means being able to sit with the tension that comes with that — the acknowledgement that there is often no easy answer to these really thorny problems. That is what we're trying to do in these classes.

Katie, and then Mihir. And then I really want to come back to the idea of what it means to start with the premise that bias is already built in. I'd love to come back around to that.

I can go with that. For my PIT project: I'm at the College of Staten Island, as Andreen mentioned, which is part of the City University of New York, and my work is centered at an extension of our main campus on the North Shore of Staten Island that was built with the intention of fulfilling our mission of equity and access to higher education for Staten Islanders. I've recruited a cohort of students right out of four area high schools into CSI, and in their first year the majority of their gen-ed courses have a focus on PIT. So I'm working closely with the faculty who typically teach English composition, a civics course, Art 100, a media literacy class, and an ethics course, all with a focus on — however we're coming to define it — what public interest tech is, and we're co-creating the meaning behind this with our students. I myself, as a social psychologist, have worked and published in the field of mobile communication studies, on how mobile devices and mobile media impact interpersonal relationships. Where we're headed in this discussion, and in how our program is evolving at the college, is thinking about the role students play — whether students in general, or users in general, can maintain a certain degree of autonomy in how they engage with technology. As integrated as technology has become into our daily lives, it's very difficult to maintain a sense of autonomy, to be able to think as an autonomous person about what choices we have in how we engage with technology. I think that might be the starting point of how we imagine ethics, while juxtaposing it against the idea that we are completely interconnected and that there are subtle and not-so-subtle ways technology is influencing our lives. Trying to maintain a sense of autonomy as a place to start when we're talking about ethics, I think, is where we're at with a lot of this.

Thanks, Andreen — it's a great question, how you integrate thinking about risk into your teaching for technologists. I work with undergraduate students, and I should say I'm not a technologist; I'm a lawyer, a humanities person who's embedded in the engineering school and working with computer scientists. What I try to do is frame risk as not an add-on that comes later, where you only worry about the social impact after you've finished building your cool new tool. That's exactly the wrong way to go about it. You have to think about what you're doing, how you're creating a technology, what goes into that process.
Is it very effective — how do you measure effectiveness? My colleague Ruha Benjamin is, of course, the leader in thinking about this. But there is a way in which you can have students really wrestle with: what is the impact of my technology? How do I make sure it is not impacting people in a way I did not anticipate? Even getting them to start asking those questions, rather than thinking first about the cheapest, most efficient algorithm to solve the problem, matters. It's asking those deeper questions first, and connecting them to the project itself — as I was saying, it isn't an add-on. It isn't that your algorithm is made worse because it now has to be fair, which is often how these trade-offs are posed: oh, how do we make it fair? Well, that comes after we've made it brilliant. In fact, your algorithm is improved and serves needs better if you start from the position of how do we make it fair. So that's the project, and I'd love to talk about it more.

Thanks for that. What I'm really excited about is that you've already begun to anticipate the question of risk as part of the design process. And Katie, you said "whatever public interest technology ends up being." One of the things we often hear in the PIT space from students is that they want to apply their technological skills and knowledge to solving social problems. And I know that how we define a problem goes a long way toward defining what we're solving for and how we evaluate success. We've just talked about impacts; now I want to think about how you help your students define problems that are actually within the scope of tech to solve, or even to redress. What does success look like? Happy to take a response from anyone who feels comfortable going there. Meredith, you look like you're starting.

Yeah, I will speak up. This is really difficult; it's one of the things my students struggle with most. I come from a computer science background — I was a computer scientist before I became a journalist — so I'm very accustomed to the cycle of: identify a problem, reduce the problem to the point where you can write code to solve it, and call it a day. That's the lean startup method; that's the move-fast-and-break-things ethos we were sold from the '90s through the aughts. Now we're at a point where we understand that's not enough, but the move-fast-and-break-things ethos is still out there. So it's hard to get students trained in the humanities and social sciences to use the construction computer scientists use: define the problem, refactor it, narrow it down, narrow it down, narrow it down. But honestly, it's not just humanities and social science students — it's hard to get everybody to do this, because it's hard. I have a couple of mechanisms I use. And I've found that this technique — state the problem, rewrite it, rewrite it a different way, narrow it down — actually works in writing as well.
When you're trying to write an opinion piece, for example, you need a really tight thesis, and if you try to pack too much into your thesis, you're not going to write it as well, because it's easier to write about one thing than about two things. So it's a general problem-solving strategy: define the problem and narrow it down until it's a problem you can make a difference on in a reasonable period of time. For people who want to solve problems through technology, who want to make the world better through technology — I absolutely welcome that urge; I have a lot of it in me as well. But it's important to take that urge and also be realistic about it: what are the metrics for success? What is the more holistic view of success? What is it going to look like, and how long is it going to take? Scale the project back to what you can actually achieve, and get a win, in a reasonable period of time.

Let's drill down a little deeper into a concrete project. Mihir, talk to us a little about the internships some of your undergraduate students are doing at the Consumer Financial Protection Bureau and New York City's Office of the Chief Technology Officer, because I think that's an opportunity for us to translate some of these questions in real time — to do what Mona says, and make it part of students' lived experience.

This is a great program that's part of the PIT ecosystem, where students apply from different undergraduate institutions to come spend a summer working on these issues with government agencies. Last year we had students at the CFPB and working with the Federal Trade Commission. What this does is make the problems concrete. One way to do it is to have somebody tell you in the classroom what it's like, but it's quite different to see it out in the field and say: okay, I'm now at the CFPB, I'm evaluating an algorithm, I have to evaluate whether it has a certain disparate impact — what are the tools and techniques available? What am I able to see, and what am I not able to see? I think it's also part of creating a career pathway, so that those of you listening who are in government institutions can recognize that hiring technologists — people with a technology background — can add a lot of value to your work, because they see problems in the way Meredith just described; they have a different problem-solving approach you can harness. So it's a mix of bringing those two elements together: having the government agency recognize the value of this different way of thinking, and having the people who come from a pure technologist background understand that there are a lot of moving parts — it's not like you can get something done tomorrow, but you start addressing a problem together. Just the act of doing that is enormously influential for the students, because they feel like, hey, these are things we can actually take on; we can work on the responses to COVID; we can work on these different projects. I think that's been very helpful for them.
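To make the disparate impact evaluation Mihir describes a bit more concrete, here is a minimal sketch of one common screening test — the "four-fifths rule," which compares selection rates across groups. This is only one possible first check an intern might run, not the Bureau's actual method; the data, column names, and threshold below are illustrative assumptions.

```python
# Minimal sketch of a disparate impact check (the "four-fifths rule").
# All data and column names are hypothetical, for illustration only.
import pandas as pd

# Hypothetical decisions from some automated system: 1 = approved, 0 = denied.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection (approval) rate per group.
rates = df.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group rate over highest group rate.
di_ratio = rates.min() / rates.max()
print(rates.to_dict(), f"ratio={di_ratio:.2f}")

# A ratio below 0.8 is a conventional red flag for disparate impact --
# a starting point for scrutiny, not a verdict.
if di_ratio < 0.8:
    print("Flag: selection rates differ enough to warrant investigation.")
```

A check like this only surfaces a gap; as the panelists note, deciding what the gap means, and what to do about it, is the harder institutional question.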
And Katie, talk to us a little about your approach with high school students, because a big part of the mix for you has been the emphasis on being place-based and engaging the community in the process — which I think Meredith hinted at. Then I want to come back to Mona and talk about a project I know she's been doing with a library system. So talk to us: what do you mean by place-based work, and how does the community contribute to that process?

I was glad Meredith mentioned the humanities and social sciences, because one of the things I neglected to mention is that the students we're recruiting into our program are those who would be choosing humanities and social science pathways, not our traditional computer science degree. We're trying to create a new way of approaching the study of technology that includes voices more grounded in the humanities and social sciences. As I mentioned before, the extension where our program is located is on the North Shore, and the four area high schools are close enough that the students who live in those neighborhoods can readily access our place. So we are very place-based, and the role technology plays in the work we're doing really comes out of the embedded lived experience of each student. I think there is a tension we're always talking about — a gatekeeping of sorts: how do you access that level of development, impact, and growth in the traditional tech companies and tech fields, while also holding on to questions of societal impact and community impact? We've seen that for what we're doing to really have impact, the people harnessing this power — the major companies developing technology — have to be open to really seeing and hearing the stories of those who are most impacted by the advancement of their technologies. It relates to the whole notion of open data: we want people to have access to the tools and the technology so that their presence is made known to the decision makers — to the policymakers, to those making decisions that will impact their lives without them being a part of it. So not only is our work place-based, it's very participatory, taking a strengths-based empowerment model. We want to help the students cultivate their identities so that, even if they haven't been able to pursue traditional pathways into computer science, they can still develop marketable tech skills that help them gain access to venues that often seem closed off to them, invisible to them, that don't include them.

Mona, can you talk to us a bit about the work you've done with the Queens Public Library, and how it relates to the intersection of public data, ethics, and bias? And Meredith, I'd also love for you to think of a concrete project you're currently working on, because I'd love to see how this plays out in real time. Sure.
Thank you. The project we're doing at the Center for Responsible AI, together with Queens Public Library and an organization called P2PU, which runs learning circles, is exactly that. We're creating a course called We are AI, which is going to be a learning circle distributed first through Queens Public Library and then available globally, where people get together and learn among themselves, for themselves, about how these kinds of systems work and what their social implications are. And I don't really like the word "implication" — as Mihir has said, the impacts are always there; it's not that the impact happens later. The politics are always there. People get together and think through what these systems mean for their own lived experience, so that they can build the literacy and capacity to make demands of local policymakers. In terms of locality, since it came up: the development of this project, which is led by Julia Stoyanovich, has been very focused on the local issues in Queens — what it actually means to work with Queens Public Library on this, as opposed to a library in Boston or somewhere else. That has very much informed how we thought about doing it. And I will say that collaboration between social scientists, people from the humanities, technologists, and policymakers always sounds so easy, but when you actually come together in a room and have to make something, those are not easy conversations, for a multitude of reasons — one of them being that we have different kinds of incentives and different languages. What has helped with course development, which is such a project, is thinking through particular cases, really thorny questions, and finding out how we can talk to one another without polarizing — we are pulled in one direction or the other all the time — and that was really hard work. But it was grounding to do this with a view to what it means to do this here in New York City.

I appreciate that, because part of how we talk about public interest technology is the intersection of all these disciplines coming together, but I know that translation and that conversation are really difficult, so actually hearing how it works in practice is helpful. Meredith, can you talk a little about some projects in practice, and how you've been applying some of the things we're talking about today?

Sure. Let me tell you a little about the PIT-UN project I'm working on at the NYU Alliance for Public Interest Technology. I'm working with Anne Washington to develop a two-week training program for early- and mid-career researchers, to train them in doing public interest technology and in spreading the word about its importance to their own students. When we talk about early- and mid-career researchers, it's very interesting to specify: are we teaching the social science material to the people who already do the tech, or teaching the tech material to the people who already do the social science? We approach those in very different ways.
One of the things we're doing is primarily speaking to social science researchers: we get them up to speed on tech skills first, and then integrate everything into the more social-science-friendly conversations around ethics, around policy, around what are you trying to do and how can you achieve it. And then, finally, how do you take all of this learning and wrap it into the curriculum you're creating for your own students, so that we can help train a new generation of public interest technologists?

Mihir, I know you teach a course on big data. This is NYC Open Data Week — it's why we're here — and NYC Open Data publishes about 3,000 datasets from different city agencies, free for anyone to use. How do you talk about big data sets with students? How do you have them think through both their utility and their limitations? Because I think Mona and Meredith have already gotten across that this is really difficult work to translate in a classroom setting, and to make clear that there are going to be limitations to how you're solving problems, and to what those problems are. What do you think about that application?

Yeah, it's not easy. I think there are two strands of thinking. One is to emphasize how important it is to have open data and how valuable it is for citizens' oversight of government practices. It's a real benefit, and my colleague Ed Felten, who is the founder of CITP, did some very early work to make sure that when data was released, it was released in standard formats so that it was actually usable by the public — so that you didn't have a thousand different ways of displaying the same information. So I want to make sure we keep in the conversation the power of big data as a lens for the public, and as a way for people within government agencies to hold themselves accountable. At the same time, you're right that big data, especially in the hands of government agencies, can be used to turn the lens on the public and to reinforce existing biases. You have to demonstrate that through examples, through lessons, through case studies — maybe it's my background as a lawyer to think in terms of case studies — to look at one particular problem and say: if you had to release sensitive data, how would you think about it? What de-identification techniques are you considering? How do you preserve the public good of releasing data about your work while not putting individuals' information at risk? And I should say the trade-offs are not an either/or: there are mechanisms to address both; there are ways to hold people accountable and collect the data. But that takes work, and it takes a lot of collaboration of the kind Mona and Meredith have been talking about.
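As one illustration of the de-identification questions Mihir raises, here is a minimal sketch of a k-anonymity check — does every combination of quasi-identifiers in a proposed release describe at least k records? K-anonymity is just one of several techniques an agency might weigh, and the dataset, column names, and threshold here are invented for illustration.

```python
# Minimal k-anonymity check: every combination of quasi-identifiers
# should describe at least k records, or re-identification gets easier.
# Hypothetical data and columns, for illustration only.
import pandas as pd

release = pd.DataFrame({
    "zip_prefix": ["100", "100", "100", "112", "112", "112"],
    "age_band":   ["20-29", "20-29", "20-29", "30-39", "30-39", "60-69"],
    "outcome":    ["approved", "denied", "approved", "denied", "approved", "denied"],
})

QUASI_IDENTIFIERS = ["zip_prefix", "age_band"]
K = 3

group_sizes = release.groupby(QUASI_IDENTIFIERS).size()
violations = group_sizes[group_sizes < K]

if violations.empty:
    print(f"Release satisfies {K}-anonymity on {QUASI_IDENTIFIERS}.")
else:
    # These rows are too unique to release as-is; they would need further
    # generalization or suppression before publication.
    print("Groups smaller than k:")
    print(violations)
```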
Meredith, in your book Artificial Unintelligence: How Computers Misunderstand the World, you talk about the principle of the unreasonable effectiveness of data, and its seductiveness. Can you talk us through that principle and its importance to our current conversation?

I'd love to, because this is one of my favorite concepts. The unreasonable effectiveness of big data is the idea that we don't really know how AI is working, but it works really well. When you have big data — millions and millions and millions of data points — you use very sophisticated modeling tools, you feed in all of this data, and you can predict with shocking accuracy things like where somebody is going to click next on a webpage, or what somebody is going to want to order along with the tortilla chips they've just put into their online grocery cart. It's really seductive, the idea that the more data we get, the better our predictions can be. Many people have taken it to an extreme and claim that everything is going to be known soon, that we're going to be able to predict everything, that in the future nothing will be unknown because we'll have so much data. And that's not at all true. You can predict pretty easy things, like grocery shopping — yes, people probably want to buy salsa with their tortilla chips. It's hardly groundbreaking, though it is amazing that you can predict it with a computer; it's really cool. So we need to be careful not to get carried away by how fun it is to predict things using data. One of the things I think is important in public interest technology is realizing the limits of what we can do with computers, which brings us back to the issue of bias. When people think computers are omnipotent, when people imagine we're on the brink of a digital utopia where everything will be computerized and it will all be sunshine and unicorns — that's when they stop recognizing bias. We really need to push back against that kind of thinking, because no technology is perfect, because people are imperfect.
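To make Meredith's grocery example concrete, here is a minimal sketch of the kind of co-occurrence counting behind simple "people also bought" predictions. The toy basket data is invented; real recommender systems are far more elaborate, but the logic — predict the most popular co-occurrence — is the same, and so are its limits.

```python
# Toy "people also bought" predictor: count which items co-occur with a
# target item in past baskets, then suggest the most frequent companion.
# Basket data is invented for illustration.
from collections import Counter

baskets = [
    {"tortilla chips", "salsa", "lime"},
    {"tortilla chips", "salsa"},
    {"tortilla chips", "guacamole"},
    {"bread", "butter"},
]

def suggest(target):
    companions = Counter()
    for basket in baskets:
        if target in basket:
            companions.update(basket - {target})
    best = companions.most_common(1)
    return best[0][0] if best else None

# With enough baskets this gets "unreasonably" accurate at easy
# predictions -- and no better at genuinely hard ones.
print(suggest("tortilla chips"))  # -> 'salsa'
```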
Any other panelists — any reflections on what Meredith said? Katie?

Yeah. I'm so excited to be a part of this panel, so thank you, New America. As everyone's talking, one of the things that keeps coming up is where I started before: how do we imagine the individual's stance — how we receive, critique, or take a critical view of the integration of technology into our lives — and where is the position within the developers and the big tech companies for that pushback? Of course, if there are organized public boycotts of certain products because the public becomes aware of a negative impact, that could have an influence in pushing the envelope. But that feels far more punitive than these ethical questions need to be. So how do we create spaces within the business of technology for a moral or ethical stance on the impact of technology on societies? I know there are some other cultures in the world that are a little better at doing this than we are, but what would that require? And think about how disruptive the notion of "open" has been in terms of who owns data — who owns the impact, who owns the consequence. This level of transparency has to have a notion of accountability built in, and then we have to close that loop: how do we ensure that if bias is revealed, or negative impact is revealed, there's follow-through? Does it require government intervention? Does it require independent monitoring? I would love to explore that further.

I would as well. Go ahead, Mona.

I want to feed back on that and ask: how do we also distribute the notion of literacy and learning? My students delight and surprise me every day. They're so ready for these questions; they do incredible research projects; and, as we've talked about, they're really ready to go into organizations and do this kind of work. We have an increasingly educated public. We also have policymakers who are now getting really into the weeds of what it means to think concretely about these thorny questions. But then we have a corporate side where some of these questions are not addressed, or are suppressed, or are pushed aside or siloed into different areas — while we also know these organizations are very diverse in and of themselves. So I wonder how we can actually get to a place where there's a conversation with these powerful organizations that is oriented toward opening up the PIT framework.
And again, that starts with thinking about the different kinds of roles that are open. I want to point to a piece of research I think is really important here: the work that Jacob Metcalf and Emanuel Moss at Data & Society are doing on "ethics owners," where they did empirical research on the kinds of roles in organizations that look at ethics and have to work with these thorny questions. Getting everybody to the table is really important here — and also pushing organizations a little, which is something we want to do with our PIT project, where we will host a big career fair convention and bring in organizations, to get them to understand that recruiting means not just recruiting technologists who then do a little bit of social science, but the whole breadth of the capabilities that students now bring, because they want to.

I want to pull Mihir in here, because I would love to hear what you think about the policy side of this — traditionally, that's been the space left to say you have to tackle the ethical dimensions of this labor. And then I also want to come back to how you all break down the ethical framework in your teaching: how do you get students to a place where they feel comfortable trying to respond to the question of a collective good, and really digging into that? But Mihir first.

Yeah. The policy world is slowly coming around to recognizing these issues. But — and perhaps this is helpful for all of us — the policy world tends to think in very concrete terms about the human stories, the effects on their constituents, and some narrative that connects up what the harm would be from an algorithmic system using data in some way we don't want it used, and the effect on citizens. As researchers and academics and others in this space, we talk about people being at the table, and that's certainly something policymakers can understand. But then their minds go to what's next: everyone's at the table, now what do we do? How do we think through the different ways of organizing this? How do I decide whether a particular piece of software is biased or not? How do I make that determination? And will you tell me, as a consumer, what is ethical and what is not — what should I be looking for? So that's a challenge for us, because these are not easy questions; as Meredith was saying, these are hard questions, ones on which we don't necessarily have consensus about the answers. I think the next step for us is to engage more concretely with specific problems and present them to policymakers: here are the issues, here's where it's going wrong, and here are things we can do to address it.

Yeah — and I see a nod from Meredith. I wanted to place this in the context of something you mention in your book, Meredith: that there should be a distinction made between what is popular and what is good. In some sense, the very design — the DNA — of computer systems and tech culture is built off that constraint: the popularity of a thing is what lets it rise to the top.
So I'm curious: how do you all try to operationalize thinking around ethical constraints, and how they can be applied practically? How do you make that case come alive for your students in the classroom, and then, hopefully, in how they take it into their work?

Well, the idea of the difference between the popular and the good is an easy sell for college students, because they've just been through the rigor of high school. And in high school, if you are not a popular kid, you understand that there is a lot that is not popular but is still really important and valid and good in the world. One of the ways social media systems are constructed is that they promote the popular as a proxy for the good, because computers can't autonomously determine what is good — that is something only a person can do. Once you start talking about it in these very concrete terms, and you realize that you can't write an algorithm that determines what is good — you have to use proxies — then it's a way of getting to: okay, what are the other proxies we're using? One of the deeply unglamorous things we do in public interest technology is read the documentation. It sounds so silly, and it sounds so simple, but nobody ever does it. Start with reading what the system designers said they were doing. It's usually written down, in black and white, what they were trying to do and what proxies they're using, and that's a really good starting place for asking: does this work, or does it not?

Mona, any reflections on that? I think I see your hand up.

Yeah, I want to second that, from two angles. One is that we can expect new regulations in Europe, dropping in about April, that will actually take more of a case-study approach and focus on different types of cases even within sectors and industries — so we will see different considerations around different kinds of applications within a sector such as healthcare. I think it's very important that we're mindful of what's happening, how regulators are thinking about this, and how it signals across the pond. That means we will probably have to cultivate more of a practice of documentation, and moving forward we might want to look to library science and to historians, who have dealt with these questions and issues for a very long time. I would hope that in addition to thinking about ethics in terms of philosophy, or in terms of law and justice, we might also include these kinds of considerations in our curricula, in anticipation of what is going on — and get students to a point where they are comfortable doing it. One of the ways I do that in my classes is that students do their own research project and have to go through the very painful process of identifying an interest, scoping it out, articulating research questions, and really documenting their own work. They end up with very solid pieces. I think that's just good practice, and we might want to think about how we cultivate it moving forward.

Katie, I see you.
Yeah. One of the things I've been thinking about and talking with my students about is the hidden ways in which, even with open data, we may render certain communities vulnerable in an effort to do good — those weird ways in which we might have the best intentions but then also do harm. Who do we get data from? Usually those to whom we have ready access, and those tend to be people who have various degrees of vulnerability, and whose lives have a certain degree of publicness. So I want my students to be careful and mindful — even though my students are living in and coming from those contexts — that in their real push to do good, there are unintentional harms that can come with the access to vulnerabilities that data provides. And then, how do we look up? How do we also study those who seem untouchable, or inaccessible, or privileged in some way, such that they're not so readily studied or accessed in their comings and goings? How do we make the kind of information we have more equitable across all dimensions of our society, not just in under-resourced areas or areas that are historically targeted for surveillance?

That's a really good point, because to Mihir's earlier point, it's vital that we have access to public data. So this question of who is in the public data right now, who's easily accessible via public data — can you all speak to who you think is missing in that space, and what that means? What are we extrapolating from these data sets, and what picture of the world is emerging? Mihir?

I thought you were about to start, but I'll give one anecdote, and I'd love to hear what Meredith says. In New York City, the police department just recently released complaint data about police officers, and that's an example of, from my perspective, effective open data: it promotes accountability and lets outside researchers probe questions about bias and police practices. It's uncomfortable; it's not necessarily easy for police officers. Another anecdote: a sociologist, Sarah Brayne, studied the LAPD and has written a great book about it — it just came out. She has this example of doing a ride-along, where the police officer keys in his location to send it back to headquarters, and she asks, why key it in — don't you have a sophisticated way of telling where you are? He says, yes we do, but we turned it off; the union demanded that the data collection be turned off to protect the police officer. So you see that kind of turning away, where the powerful don't like data being collected about themselves and don't want to be held accountable. And I think there's an opportunity here to use data to hold people accountable, like I was saying before.

That's a great point. Meredith?

Okay. I think one of the best examples of figuring out who is and isn't in the data is Joy Buolamwini's work.
She has a project called Gender Shades that she did with Timnit Gebru and Deb Raji, and the Gender Shades project was primarily about facial recognition. They discovered that facial recognition systems are better at recognizing light skin than dark skin, better at recognizing men than women, and do not recognize trans or nonbinary folks at all. This is a problem. Some people look at this project and say, all right, the answer is to make the data sets more diverse — put more people of different skin tones into the training data, and then the facial recognition systems will be better. That's not the answer, because facial recognition systems are disproportionately weaponized against vulnerable communities, against communities of color. So the answer is: let's not use facial recognition, and let's especially not use facial recognition in policing. It's not really about changing the technology; it's about whether we should be using the technology at all. I am delighted to say that in the wake of this project, several cities have abolished the use of facial recognition in policing, which is terrific. I would also call attention to a project by AI Now, where they are collecting the data sets that have been used to train popular AI models. By collecting these data sets, you can look at the data and see who is and isn't in it, which gives you a clue about who's being left out of the decision making. In terms of investigating algorithms generally, I think it's important to call attention to the work of the Markup, led by Julia Angwin. Her project Machine Bias for ProPublica really kicked off the entire conversation about fairness in algorithms, and the Markup is doing exceptional work investigating algorithms, holding decision makers accountable, and building new technology to get more insight into the technologies we use every day.
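A Gender-Shades-style audit comes down to disaggregating a model's error rate by subgroup instead of reporting one overall number. Here is a minimal sketch of that disaggregation; the subgroup labels and results are invented, and a real audit like Buolamwini, Gebru, and Raji's involves carefully curated benchmarks, not toy arrays.

```python
# Minimal sketch of a disaggregated accuracy audit: one overall metric
# can hide large gaps between subgroups. Data below is invented.
import pandas as pd

results = pd.DataFrame({
    "subgroup": ["lighter-skinned men"] * 4 + ["darker-skinned women"] * 4,
    "correct":  [1, 1, 1, 1,             1, 0, 0, 1],
})

overall = results["correct"].mean()
by_group = results.groupby("subgroup")["correct"].mean()

print(f"overall accuracy: {overall:.0%}")   # looks fine in aggregate
print(by_group.to_string())                 # the gap only shows up here

# The audit question is the gap, not the average:
gap = by_group.max() - by_group.min()
print(f"accuracy gap between best and worst subgroup: {gap:.0%}")
```

The point the panel makes still stands after the measurement: a gap like this is an argument about whether to deploy at all, not just an engineering target to close.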
Katie — and before you go, I want to say that in about seven minutes we're going to open it up for questions from our audience, so please do use the Q&A feature in the chat. We'd love to hear from you, and we'd love to offer up resources as well. That's probably going to be my final question to the group, and you've already begun to answer it: what are some of the projects and organizations advancing a thoughtful approach to bias and ethics, and what do you believe are the essential features of their work that make them valuable? But Katie, go ahead.

I was just going to say: maybe we should also think about how we enter the conversation around private companies partnering with public agencies who may then have a desire to weaponize seemingly benign or beneficial technology for purposes that have unfair impacts, especially on poor communities and communities of color. And again, I don't want to speak out of turn, and I'm not naming certain companies. But say, for instance, a company develops some technology aimed at stopping child sex trafficking, which can then be co-opted by local police agencies to target and harass sex workers. So there are ways in which we should probably have some kind of stance on, or at least talk about, the degree to which public agencies should partner with private companies in order to exploit their product or the data they collect. There's something happening locally with a technology that combs Facebook for photos to match them up with mugshots. What role that plays in terms of ethics — I think it's important for us to have some kind of accountability, or at least to know whether these public-private partnerships are occurring.

I think that's really valuable, and one of the things I think has been interesting to see emerging is the decision not to continue down a particular road. That raises, for me, the question of the design process: where do we insert that piece in the design process? How do you all see that as a potential roadmap for how we tackle these ethical questions, and where are the off-ramps? Is this the Frankenstein question? To Mihir's earlier point, you don't want to build something and only then go, "oh, well, now I'm noticing." A couple of themes have emerged from this conversation so far: if you presume bias, how are you operating within the space, and what are the on- and off-ramps? Because the way it has played out publicly, at least, is: "oh no, we've created this new thing and it's done this very bad thing we didn't anticipate." But now we see folks saying you can have that conversation, and begin that dialogue about impact, as early as possible in the process. So I'm curious for you all to speak to that — where you see it working fairly well, and what you think it might mean for how we proceed down this road in the future.

Yeah, if I can jump in real quick. I think that if companies encourage these conversations from the very beginning of development, it has the potential to have the kind of impact we're talking about. But the companies themselves need to be ready for those decision points — if a decision point comes, a decision will actually be made. So maybe some early setup work: if we are faced with this, what will our decision be? And they should stick to it, and not fire the person who brings that thing to light. Again, I can't share it publicly, but I've faced that personally on a project I was working on: once certain things came to light, I was no longer part of the team, and I had no recourse to intervene in that decision. So going through some possible scenarios and exercises to ready the company for a true commitment to doing this work ethically might be a starting point.

Anyone else? Go ahead, Mona, and then Mihir.
Okay, really quickly, I'm going to be the grumpy social scientist for just a second. I really appreciate the Frankenstein question, because I find it incredibly hard to answer. Look at the structural issues we're facing, which are really surfacing now — thankfully, right in the middle of our conversations — particularly when it comes to bringing people into design, whether through participatory design methods, co-design, public engagement, literacy, however you want to call it. The problem is: what are the structural conditions under which this happens? Last year I wrote a little piece with a few other folks about what I think is a real problem, which is that participation is very often extractive by nature, because we are in an extractive social system. If we don't provide more equity when it comes to resource distribution, health care, child care, all these things, we can't even begin to sincerely think about what it means to make these sociotechnical systems — these interfaces between the public, the technology corporations, and public institutions — more equitable. We need to have these conversations alongside conversations about technology, and be mindful that we can't get bogged down in "oh, how can we help make this technology better" when there's no structural change. What I want to add, to close that off, is that there is a real opportunity to engage locally with these kinds of questions. For example, here in New York City the City Council has a committee on technology; there are public hearings, there is reporting on them, there are local advocacy groups, universities are involved, different kinds of artists are involved. There is something going on — and not just in New York. You can engage locally to push for this structural change.

There are a number of undergraduate and graduate students I work with — and a couple of you on the panel have experienced this — who really understand this problem and want to be on the front lines of addressing it. This is really an opportunity for the folks who do the hiring to think about getting more of these students into your fold: give them opportunities, ask them what they've learned, learn about their perspectives. We're at a generational shift, I think, where the next generation of students coming through understands very well the implications of these technologies, knows how to think about them, and wants to work on addressing them. So I would encourage you to think of that as a resource — to look to your students for ideas, and to your new hires.

Thanks for that point. I want to throw this quickly to Mona before I go to a couple of questions from the Q&A. Mona, you've mentioned that you're working on an upcoming career fair, and that one thing you want to be sure to include are these roles you think could be expanded. Could you talk a little about what you're thinking, so folks can begin to see a bit of a roadmap? Yeah, sure.
Just a disclaimer: it's early days — the event will be put on in the fall of this year. What we're doing now is starting conversations with companies, and with other potential collaborators within the PIT-UN network, to understand how they think about potential PIT roles at this point, and how we can work with them to showcase those roles during the event in different ways. For example, through a case-study workshop, or a range of case-study workshops that we'll be hosting; through showcasing the people who currently occupy these ethics-owner or PIT roles within organizations, and their diversity; and by really showcasing the work students are already doing in these case-study-focused courses across the universities — work that is really high quality, bringing together macro considerations with micro considerations — and centering that in different ways, for example through a competition. The thing with all of that is relationship building, and having conversations, because there is of course the template of a career fair, which is very recruiting-focused, especially in the tech industry, and we need to disentangle that a little. We're partnering with All Tech Is Human on this, who are already doing work in that space. So it's a process that we hope educates both sides; the actual project is not just the fair, it's building that conversation.

Well, you're creating a community, which is what we really want to see conceived in this space. So, a couple of questions from our participants — and again, please continue to share your questions in the Q&A. Would you say coming from a social science background is a benefit or a barrier to data science? Who wants to tackle that one?

I'll take that. I think you can start anywhere and get into data science; it just requires commitment. It's not easy. One of the mistakes people make is imagining that all technology is the same — that because you really like using Instagram, you're going to want to be a data scientist — and those are really different things. That said, data science is hard but it's not impossible, and you absolutely should feel qualified to do it no matter what your background is.

This next comment is more of an observation: the more data we users allow services to obtain, the more opportunities there are for that data to be used in unethical ways. And then a question: when is the answer to creating a new data-driven product "no," whether for ethical reasons or lack-of-data reasons? And how do you teach students, and create incentives and protections for them, to say no when it will have implications for their careers? Some of the structural issues, I think, that Mona was getting at. Go ahead, Mona.

I just want to point to another resource, by AI Now, who have put out a guide on how to interview a tech company — a little piece they put together to help students wrap their heads around how a particular organization works culturally, with all kinds of questions. So that's one way to find answers to those questions.
I think, generally, that question should at least be able to be on the table, at the very least, and I think that is very much a political project. We're in the middle of that right now, so there's no easy answer; I think that's fair.

Yeah — I mean, we all fall into the trap of hindsight being 20/20. Usually, after the problem happens, we can all say, why didn't they see it coming? Seeing it coming is usually the hardest part. So I don't think "no" is very easy — and maybe it shouldn't be a flat no; maybe it should be a yes, moving gingerly along a path, depending on the goals of your project. Maybe it means building in checkpoints, so there's a sense of reflection, and perhaps even an impartial party able to give some insight into the progress of a project's development. I don't think any of these issues are very clear-cut, which is why they're so ripe for moral and ethical investigation: there is no clear yes and no clear no, and every yes could also be a no, and every no could also be a yes. So there's always that risk-versus-benefit ratio that has to be negotiated at every step. But, as Mona said, at some point there should be some ability for someone to intervene and say no.

Meredith, this might be a question you can add some value to: where and how does management play into ensuring that computer programs, operations, and policies are ethical — especially in areas such as social media versus journalism? What role does management play?

Management is everything. Management determines the direction of a project; management says "go ahead" even when concerns are surfaced. There is another thing at work that is very important to note, which is a feature of group dynamics. When you are in a group, and you're talking about, say, developing a new algorithm, nobody wants to be the person who says, hey, I think this has the potential to be racist. Nobody wants to be the person who says, hey, this excludes trans folks. People want to get along in groups; people want to keep their jobs; people don't want to be perceived as the one who stirs the pot. So we just have to be aware of this, and managers should be conscious of this group dynamic and make it safe to speak up. There is a lot going on at Google right now that points to poor management. Managers there, for example, would have employees coming to them complaining of racism, or talking about incidents of racism they had experienced, and the managers and HR people at Google would respond by giving them mental health resources. You don't need mental health resources when you're going to HR about having experienced racism at work; you need HR to address it. You need remedies. Therapy is great, but it's not actually going to fix the problem of your coworker being racist.

Katie?
Yeah, I mean, just as we deal with in higher ed, workplace culture can be totally separate from the impact of what you're developing. You could adopt a really equitable, healthy workplace environment and still make a tool that does harm to vulnerable communities, and I think that can be a hard thing for a lot of companies to deal with. As Meredith was saying, nobody wants to be the one to speak up and say, I think this might be racist, or this might be transphobic. You could have a lot of strengths on the interpersonal, culture-building piece of the work and still miss a lot in terms of the impact of what you're developing. And one thing we didn't mention, especially for tech startups, is the amount of stress that is incurred when you have angel investors and people flooding your idea with a lot of money. It may be hard to stop that and say, you know what, we have to stop, because we saw the potential for this to go wrong. So creating ways for people to manage the stress of doing the quote-unquote right thing is, I think, something that goes beyond how you deal with microaggressions or macroaggressions in the workplace.

I think one of the things that's emerging here, and you all have hinted at it, is that there needs to be a larger structural, systemic approach to dealing with these issues, one that is not left to the individual alone. I've noticed some emerging practices, for example around how papers submitted at conferences are reviewed for their ethical implications. Are there any signals you're seeing on a larger scale that you can offer the audience as an approach worth supporting or sharing? We've talked about this in somewhat non-concrete ways, but I'd love to hear about some systemic pieces that you think would be valuable. Mihir?

I'll make a plug for a program that I'm running: we're trying to bring in policymakers who have a challenging question and do a case study session at CITP, where computer scientists, social scientists, and others examine the question from multiple angles and give feedback on its implications. One of the things this addresses is the point you're raising, that sometimes internally it's hard to make that pitch to your fellow coworkers, to ask, what's the problem here? If you engage with a university audience, which is more ready to examine an idea from multiple angles, you can battle-test it and see how it's received. And I think a lot of universities want their students to have that experience as well, so that they see how difficult it is to wrestle with some of these problems. That's certainly something I'm trying to do with the case studies. We've seen people come with their challenges: how do we collect data responsibly? How do we make certain kinds of decisions? It's not that we're going to give you the right answer, but it at least allows for the surfacing of multiple angles and multiple points of view.
And that's something the organization can then take away and do something with.

Thanks for that, Mihir. Someone else has asked this question: older people, not understanding technology, often don't realize that most young people are end users and consumers, and we've tried to communicate that. How do we effectively educate different communities about the impact of technological innovations when it may not be immediately apparent? What are some good teachable moments for that kind of conversation?

I can start this one off. One of the things that I've written about is an idea I call technochauvinism: the idea that technology is superior. We need to push back against this, and we need to stop imagining that we're going to replace all of our existing systems with technology and it's just going to be sunny days ahead. The COVID vaccine process is a good example. At first, at least in New York State, you could only sign up for a COVID vaccine online, using incredibly complicated web interfaces, and older users were not able to navigate them effectively. I know I had to make appointments for my elderly relatives because they simply could not navigate these things. There is now a phone number you can call to get help as well, but there should have been a phone number from the beginning. You can't just imagine that you're going to take your paper forms, put them on the web, and automate away your customer service line. Using technology effectively means using the newest technology while also retaining the older modes of communication: for when there's a hurricane and the power goes out, for older users, and for users with a range of abilities who are more comfortable with different technologies. So don't imagine that you're going to replace everything with new technology; you're just going to keep doing more and more and more.

Thanks for that, Meredith. Mona?

Quickly, to add to that: I think for me this also links back to resources for communities. One of the things I did last year was run a research project called Terra Incognita, where we examined the emergence of digital public space and community under conditions of the lockdown. One of the things we learned is that communities are incredibly good at coming up with systems and infrastructures for mutual support, specifically also for elderly members of the community, so they could participate in worship or in other activities. What these communities need is support to do the work they are already doing, and recognition of that work as labor. I think that's a very important policy lesson here: the stacking of technologies on top of each other that Meredith described can be community driven and community maintained as infrastructure. That's a notion we should really be mindful of as we move forward.

Thanks. I think I'm going to let Katie have the final word before we close out. And thank you, everyone, for taking the time to pop back in after our technical snafu.
You know, we haven't really talked that much about COVID today, and I think it is going to be interesting to see whether we can imagine what a post-COVID world might be. As Meredith was saying, the quick shift to connecting virtually may have left some users behind, but I think there was an enormous learning curve that was also overcome, and that might have leveled the playing field a little. Of course, there's unevenness in access, and how we dealt with the pandemic really illuminated a lot of the disparities in communities. But I also think there's enormous potential in that we may now have a lot more people who are more tech savvy and who also understand how integrated technology is in our lives.

Thanks for that. I want to thank my panelists for joining us today and for helping us navigate this space of public interest technology; you all have been so wonderful and have contributed so much to this process for us. What I want to communicate right away is that this is an evolving space. It's a process. As you've seen, and as I think our panelists have said, there continue to be evolving questions and challenges that we want talented and engaged folks to participate in helping to define for us. As Meredith has said, again, it's not done. This is a process. But I want to thank Meredith Broussard, Kathleen Cumiskey, Mihir Kshirsagar, whose name I love and finally went and pronounced, and Mona Sloane. One thing I want to mention: we will be sharing resources and follow-up contact details with all of the participants who registered. There were lots of mentions of different projects, books, and other things that folks have found really valuable, so I'm going to collect those from our panelists as best I possibly can and share them out in an email to all of our participants. Thank you so much again for joining us on this Monday afternoon. Again, this is Andrine Soli, and I'm the director of the Public Interest Technology University Network. You can find us at newamerica.org. We look forward to continuing to engage you around these thorny issues of ethics, tech, and bias. Thanks again.