Welcome to the September 24th version of the BKC lunch. Thank you for coming. Before we get to the good part, just some housekeeping. First of all, this is being live streamed and recorded, so wave to everybody on Twitter and elsewhere. I tell you that so you know that you are being recorded and can govern yourselves accordingly. If you want to, please feel free to tweet questions at us at BKC Harvard or using the hashtag BKC lunch. The Q&A session will be at the end, after Dr. Benjamin talks.

So now the good stuff. Dr. Ruha Benjamin is a scholar of the social dimensions of science, technology, and medicine. An associate professor of African American Studies at Princeton University, she's the founder of the Just Data Lab, which focuses on bringing a humanistic approach to data and reimagining and rethinking data for justice. She is the author of two books: People's Science, out from Stanford University Press, and the new book Race After Technology, out from Polity, which was published this year. She's also the editor of Captivating Technology, which is out from Duke University Press this year and is available after the talk outside. So please join me in welcoming Dr. Ruha Benjamin.

Good afternoon. How are you? I'm thrilled to be here. It's my first time visiting the center, and I'm so excited to be in conversation with my sister colleague, Jasmine McNealy, who actually read early drafts of Race After Technology, so I was able to incorporate her and others' insights into that work. We have a little bit of time, and I know most of it is going to be left for discussion, so I'm just going to jump right in.

Please join me in acknowledging that the land on which we gather is the traditional and unceded territory of the Massachusett. We acknowledge that academic institutions, indeed the nation state itself, were founded upon and continue to enact exclusions and erasures of indigenous peoples.
This acknowledgement demonstrates a commitment to beginning the process of dismantling ongoing legacies of settler colonialism and to recognizing the hundreds of indigenous nations who continue to resist, live, and uphold their sacred relations across their lands.

With that, let me begin with three provocations. First, racism is productive. Not in the sense of being good, but in the literal capacity of racism to produce things of value to some, even as it wreaks havoc on others. We are taught to think of racism as an aberration, a glitch, an accident, an isolated incident, a bad apple, in the backwoods, and outdated, rather than innovative, systemic, diffuse, an attached incident, the entire orchard, in the ivory tower, forward-looking, and productive. In sociology, we like to say race is socially constructed, but we often fail to state the corollary: that racism constructs.

Second, I'd like us to think about the way that race and technology shape one another. More and more people are accustomed to thinking about the ethical and social impact of technologies, but this is only half of the story. Social norms, values, and structures all exist prior to any given tech development. So it's not simply the impact of technology, but the social inputs that make some inventions appear inevitable and desirable.

Which leads to a third provocation: that imagination is a contested field of action, not an ephemeral afterthought that we have the luxury to dismiss or romanticize, but a resource, a battleground, an input and output of tech and social order. In fact, we should acknowledge that most people are forced to live inside someone else's imagination. And one of the things we have to come to grips with is how the nightmares that many people are forced to endure are the underside of elite fantasies about efficiency, profit, and control. Racism, among other axes of domination, helps produce this fragmented imagination: misery for some and monopoly for others.
This means that for those who want to construct a different social reality, one grounded in justice and joy, we can't only critique the underside; we also have to wrestle with the deep investments, the desire even, for social domination. So those are the main takeaways.

Let's start with some concrete examples. A relatively new app called Citizen will send you real-time crime alerts based on a curated selection of 911 calls. It also offers a way for users to report, livestream, and comment on purported crimes via the app. And it shows you incidents as red dots on a map so you can avoid supposedly dangerous neighborhoods. Now, many of you are probably thinking: what could possibly go wrong in the age of Barbecue Beckys calling the police on black people cooking, walking, breathing out of place? It turns out that even a Stanford-educated environmental scientist, living in the Bay Area no less, is an ambassador of the carceral state, calling the police on a cookout at Lake Merritt. It's worth noting also that the app Citizen was originally called the less-chill name Vigilante. And in its rebranding, it also moved away from encouraging people to stop crime; now it simply encourages them to avoid it.

What's most important to our discussion, I think, is that Citizen and other tech fixes for social problems are not simply about technology's impact on society, but also about how social norms and values shape what tools are imagined necessary in the first place. So how should we understand the duplicity of tech fixes? Purported solutions that nevertheless reinforce and even deepen existing hierarchies. In terms of popular discourse, what got me interested in this question was the proliferation of headlines and hot takes about so-called racist robots. A first wave of stories seemed shocked at the prospect that, in Langdon Winner's terms, artifacts have politics. A second wave seemed less surprised: well, of course technology inherits its creators' biases.
And now I think we've entered a phase of attempts to override or address the default settings of racist robots, for better or worse. And one of the challenges we face is how to meaningfully differentiate technologies that are used to differentiate us. The combination of coded bias and imagined objectivity is what I've termed the New Jim Code: innovation that enables social containment while appearing fairer than discriminatory practices of a previous era. This riff off of Michelle Alexander's analysis in The New Jim Crow considers how the reproduction of racist forms of social control in successive institutional forms entails a crucial socio-technical component that not only hides the nature of domination, but allows it to penetrate every facet of social life under the guise of progress. This formulation, as I highlight here, is directly related to a number of cousin concepts, as we might call them, by Browne, Broussard, Daniels, Eubanks, Noble, and others. And I'm so happy to see my colleague Jessie Daniels here.

Take, for example, what we might term an old-school targeted ad from the mid-20th century. In this case, a housing developer used this flyer to entice white families to purchase a home in the Leimert Park neighborhood of Los Angeles, which is where my grandparents eventually "infiltrated," to use the language of the time. But at this point in the story, the developers were trying to entice white buyers only, by promising them beneficial restrictions. These were racial covenants that restricted someone from selling their property to black people and other unwanted groups. But then comes the civil rights movement, the black power movement, the Fair Housing Act of 1968, which sought to protect people from discrimination when renting or buying a home. But did it? Today, companies that lease or sell housing can target their ads to particular groups without people even knowing they're being excluded or preyed upon.
And as ProPublica investigators have shown, these ads are often approved within minutes. Though it's worth noting that in just the last week, advocacy groups have brought the first civil rights lawsuit against housing companies for discriminating against older people using Facebook's ad system. So this idea of the New Jim Code is situated in a growing literature that I think of as race-critical code studies, an approach that's concerned not only with the impacts of technology, but with its production, and particularly how race and racism enter the process. And I would encourage a book club; check out all of these wonderful works that I'm in conversation with.

So to get us thinking about how anti-blackness gets encoded in and exercised through automated systems, I write about four conceptual offspring of the New Jim Code that fall along a kind of spectrum. At this point in the talk, I would usually dive into each of these with examples and analysis, but for the sake of time, I'll save this for the discussion if anyone's interested, and shift gears now to discuss forms of mobilizing against the New Jim Code. Like abolitionist practices of a previous era, not all manner of resistance and movement-building should be exposed. Recall how Frederick Douglass reprimanded those who revealed the routes that fugitives took to escape slavery, declaring that those supposed white allies turned the underground railroad into the upper-ground railroad. Likewise, some of the efforts of those resisting the New Jim Code necessitate strategic discretion, while others may be effectively tweeted around the world in an instant. Exhibit A: thirty minutes after proposing an idea for an app that converts your daily change into bail money to free black people, Compton-born black trans tech developer Dr. Kortney Ziegler added, it could be called Appolition.
The name is a riff on abolition and a reference to a growing movement toward divesting resources from policing and prisons and reinvesting in education, employment, mental health, and the broader support system needed to cultivate safe and thriving communities. Calls for abolition are never simply about bringing harmful systems to an end, but also about envisioning new ones. After all, the etymology includes root words for destroy and grow. To date, Appolition has raised more than $137,000, with the money directed to local organizations that have posted bail, freeing at least 40 people. When Ziegler and I sat on a panel together at the Allied Media Conference, he addressed audience questions about whether the app is diverting even more money to a bloated carceral system. But as Ziegler clarified, money is returned to the depositor after a case is complete, so donations are continuously recycled to help individuals, like an endowment.

That said, the motivation behind ventures like Appolition can be mimicked by people who don't have an abolitionist commitment. Ziegler described a venture that Jay-Z is investing millions in called Promise. Although Jay-Z and others frame it in terms of social justice, Promise is in the business of tracking individuals via the app and GPS monitoring, creating a powerful mechanism that makes it easier to lock people back up. Following criticism by the organization BYP100, we should understand this and other forms of e-carceration as part of the New Jim Code: dangerous and insidious precisely because it's packaged as social betterment.

So where might we look for real promise? For me, one of the most heartening developments is that tech industry insiders have increasingly been speaking out against the most egregious forms of corporate collusion with state-sanctioned racism and militarism. For example, thousands of Google employees condemned the company's collaboration on a Pentagon program that uses AI to make drone strikes more effective.
And a growing number of Microsoft employees are opposed to the company's ICE contract, saying that, quote, "as the people who build the technologies that Microsoft profits from, we refuse to be complicit." This kind of informed refusal is certainly necessary as we build a movement to counter the New Jim Code, but we can't wait for workers' sympathies to sway the industry. Initiatives like Data for Black Lives and the Detroit Community Technology Project offer a more far-reaching approach. The former brings together people working across a number of agencies and organizations in a proactive approach to tech justice, especially at the policy level. And the latter develops and uses tech rooted in community needs, offering support to grassroots networks doing data justice research, including hosting DiscoTechs, which stands for Discovering Technology: multimedia, mobile neighborhood workshop fairs that can be adapted in other locales.

I'll quickly mention one of the concrete collaborations that's grown out of Data for Black Lives. A few years ago, several government agencies in St. Paul, Minnesota, including the police department and the St. Paul public school system, formed a controversial joint powers agreement called the Innovation Project, giving these agencies broad discretion to collect and share data on young people, with the goal of developing predictive tools to identify, quote, "at-risk" youth in the city. There was immediate and broad-based backlash from the community, with the support of the Data for Black Lives network. And in 2017, a group of over 20 local organizations formed what they called the Stop the Cradle to Prison Algorithm Coalition. Eventually, the city of St. Paul dissolved the agreement in favor of a more community-based approach, which was a huge victory for the activists and community members who had been fighting these policies for over a year.
Another abolitionist approach to the New Jim Code that I'd like to mention is the Our Data Bodies Digital Defense Playbook, which you can download for free online, and make a donation to the organization if you're so inclined. The playbook contains in-depth guidelines for facilitating workshops and group activities, plus tools, tip sheets, reflection pieces, and rich stories crafted from in-depth interviews with communities in Charlotte, Detroit, and LA that are dealing with pervasive and punitive data collection and data-driven systems. The aim here, as the organization says, is to engender power, not paranoia, when it comes to technology. And although the playbook presents some of the strategies people are using, in the spirit of Douglass's admonition about the upper-ground railroad, not everything that the team knows is exposed. Detroit-based digital activist Tawana Petty put it bluntly: let me be real, y'all get the Digital Defense Playbook, but we didn't tell you all of their strategies, and we never will, because we want our community members to continue to survive and to thrive, and so the stuff that's keeping them alive, we keep to ourselves.

And finally, close to home, there's the work of my brilliant colleague at MIT, Sasha Costanza-Chock, and the Design Justice Network. Among the guiding principles of this approach is that we prioritize design's impact on the community over the intentions of the designer, and before seeking new design solutions, we look for what is already working at the community level. The fact is, data disenfranchisement and domination have always been met with resistance and reimagining, in which activists, scholars, and artists have sharpened abolitionist tools that employ data for liberation. From Du Bois's data visualizations that sought to counter the racist science of his day, to Ida B. Wells-Barnett's expert deployment of statistics in The Red Record, there is a long tradition of challenging and employing data for justice.
In that spirit, the late legal and critical race scholar, Harvard professor Derrick Bell, encouraged a radical assessment of reality through creative methods and racial reversals, insisting that to see things as they really are, you must imagine them for what they might be. And so one of my favorite examples of what we might call a Bellian racial reversal is a parody project that begins by subverting the anti-black logics embedded in new high-tech approaches to crime prevention. Instead of using predictive policing techniques to forecast street crime, the white-collar crime early warning system flips the script by creating a heat map that flags city blocks where financial crimes are likely to occur. The system not only brings the hidden but no less deadly crimes of capitalism into view, but includes an app that alerts users when they enter high-risk areas, to encourage citizen policing and awareness. Taking it one step further, the development team is working on a facial recognition program to flag individuals who are likely perpetrators, and the training set used to design the algorithm includes the profile photos of 7,000 corporate executives downloaded from LinkedIn. Not surprisingly, the averaged face of a criminal is white and male. To be sure, creative exercises like this are only comical if we ignore that all of their features are drawn directly from actually existing proposals and practices in the real world, including the use of facial images to predict criminality. And so, less fictional and more in terms of getting involved, I would encourage those who are interested to sign up for this webinar, which we might think of as a movement to track the trackers. Going back to my collaborators in St. Paul, they're trying to build up a national network of people who want to be more involved in this. And I'll tweet this out later for those who don't have a chance to snap it.
So if, as I suggested at the start, the carceral imagination captures and contains, the abolitionist imagination opens up possibilities and pathways, creates new templates, and builds on critical intellectual traditions that have continually developed insights and strategies grounded in justice. May we all find ways to build on this tradition. Thank you for your attention.

Obviously, like I said, this is the good stuff right here. As moderator, I get to ask the first question. So, the new book, Race After Technology. You mentioned some of the chapters, but could you talk a bit more about the ideas, the concepts that you cover in each of the chapters and the distinctions between them?

Starting with the black studies, critical race approach, I just went into it trying to show the productivity of racism, and finding racism everywhere. And one of my early readers, one of my colleagues, was like, hold up, Ruha, hold up, okay. You can't just say everything is racism. You need to make some distinctions, create some categories, fit things in a little box, the way sociologists like. And so my next step was trying to think about what are some of the ways to differentiate what I was seeing, the way that coded inequity was operating in different systems. And that led to the chapter breakdowns. The way you can think about it, from engineered inequity to techno-benevolence, is going from more obvious forms of the New Jim Code, things that you kind of see coming, where the designers are trying to create hierarchies and inequality, down the line to the point where people are trying to address bias through technology, to create various tech fixes, the promise being that this will allow us to sort of bypass these human subjectivities and biases. So going through the chapters is going from more obvious to more insidious forms of coded inequity, where beyond the intentions of the designers, you can still reproduce inequity, right?
So it's trying to disentangle the intention of the insider to do good. And for me, that's where the more interesting bit is: where the ethos is to solve social problems through technology, yet unwittingly, one could still manage to reproduce these biases and inequities. So that's the kind of breakdown that the chapters and those categories follow. And importantly, it's not to draw bright lines between these different lenses, but to sort of sketch a spectrum, to see the way that, beyond the intentions, these effects can take place.

Now, part of the conversation in the book is about not just whether certain technologies should be built. You mentioned Vigilante was rebranded as something else. So can you talk about the language we use when we talk about technology: frames, adoption?

Yeah, absolutely. I think one of the most surprising things for me, from when I started the project to the present, is how the growing public consciousness around this, a kind of deep skepticism towards technology, has grown more powerful. I thought when I started this that I would have to be more on the defensive in these conversations. And so in terms of branding, the more that the public critique is growing and awareness is growing, there's a need to regroup on the tech side and be more attentive to that. And so part of that is just in the framing, the language. We can think of it similarly to the way that, with Michelle Alexander and The New Jim Crow, there's been a kind of sea change where across the political spectrum, from far right to left, there are efforts to reform the criminal justice system. And so that consciousness has led to a much broader sense that change needs to come, but it's precisely how change comes, how within the context of reform you can deepen the tentacles of a carceral state, in that case through reform.
And one of my colleagues, Naomi Murakawa, has a fantastic book that shows the role of the Democratic Party in this process over the last two decades. So I think that's where I'm most interested: how our desire to do better can actually backfire and reinforce various inequities.

So we're gonna open it up for questions. Anyone have a question they're dying to ask?

Hi, thank you for being here. That was a really inspiring talk. I'm a new fellow here, and you mentioned this aspiration of having the impact on the community trump the intentions of the designer. That's something I'd really like to see as well in a lot of the technologies I work closely with. But how do you envision that working? I mean, is that by law? Is that by subverting the technology? How would you achieve that?

Yeah, I mean, so I pointed to the Design Justice Network. And I think part of the challenge is that, well before the process of designing any given thing, we have to care as much about the social infrastructure and the relationships as we do the product. And so what I see the network trying to do is to seed a social infrastructure and a kind of reciprocity between designers and communities, one of the other principles being that the designer is not the expert in the process. Like, you have some skills, but you have to really take your marching orders from the needs and the concerns of those who you supposedly are trying to serve. And so there's no magic bullet, like follow this checklist and then you can ensure that you have a participatory tech project, right? That's kind of what is demanded, though. People want the easy fix, even the easy social fix. How do we quickly create sound relationships so people feel heard, right? As opposed to thinking about the hard work of laying this groundwork, which requires a kind of reorientation, even, in how we think about the relationship between the academy and surrounding communities.
And it's not just applicable to tech, you know? People creating all kinds of knowledge are often divorced from the populations and the people whom it's supposedly meant to be talking about and talking for, right? And so, the examples that I gave: if you think about what's happening in Detroit, we think about design justice, but also the Stop LAPD Spying Coalition. They did research on their own, gathering interviews about what it feels like to be surveilled on a daily basis, and created qualitative research that was very participatory. The people from the communities were doing the interviews, crafting the questions, then producing the outputs. And so I think we have a lot of examples, but there are structures within the academy, in terms of the timeline, you know, how you produce and publish, that make this less feasible and run against building these relationships.

Thank you. I work as the director of diversity at the engineering school here. And I would love to hear your perspective on what the role of higher education is, in that we are training the next generation of engineers and scientists. What is our responsibility in making sure that they are able to approach these new inventions and the things that they're working on with this perspective?

Yeah, I mean, that's one of the main areas of concern for me. So if you think about the various routes, the various arenas for change: we have people working on litigation, how do we litigate algorithms? We have people working on legislation, creating bans and moratoriums and accountability structures. We have organizing, tech workers organizing, communities organizing. But then we have this whole arena, which for me is kind of like ground zero: thinking about education, training, pedagogy as where we begin to see new ways of thinking about our relationship to the material and digital infrastructure.
And that, for me, is where I feel like I have the most input, because I'm a teacher, knowing that these other arenas are important for people to be working in. And so I think it's heartening to see pockets within the university that are taking seriously this idea of public interest technology. This is a new kind of framework that's gaining hold, but in all examples, I think it's really important for us to be vigilant that the ethics of technology doesn't become this kind of token thing that we throw in as an afterthought, or at the end of training, or as an optional extra. Like, if you have time in your studies, then you can take this class, or at the very end, when everyone's tired at the end of the semester, then we'll throw in the critical books, right? And so I think it's really important to think very carefully about what the structure of inclusion is, because you can include things in a tokenistic way that simply reinforces its inferiority as a way of thinking and as a framework, rather than trying to integrate it. And it's not easy. Again, going back to the question of how you build relationships, there's not a three-point plan, just do this and it'll magically be integrated. But I think it's partly people like yourself who are in positions to raise awareness and to build up a collective of people calling for change. And I don't think students realize the power of their voice within academia. Like, I think about medical schools; I've been brought into medical schools to think about how race is incorporated in their pedagogy. And students, White Coats for Black Lives and other organizations, are basically saying, we don't feel like we're equipped to go out into the world and be doctors if we don't have X, Y, and Z set of skills with respect to race and inequity and so on. So it's almost the beneficiaries of this education saying, you're not training us up right.
And it's an interesting reversal in terms of who's taking the lead. But I do think that, like White Coats for Black Lives, it would be heartening to see engineering students and other students in STEM fields understand that this is beyond their own university, to think about it at a national level, and in a broader sense of building a movement of students who are calling into question these fields and the training within them. And I also welcome, I know people often say, don't make a comment, ask a question, but I also think this is a discussion. So you should feel free to just reflect out loud and give ideas, correct me if I said something, or elaborate. You don't have to only ask a pointed question.

Hi, Dr. Benjamin. So my question is around, what do you want this book to do in the world? Because with the New Jim Code, one of the things that I think about in Michelle Alexander's work is that she clearly set out to start a movement. Is your use of that term, is it that you're looking to do something similar? And then a follow-up and related question is, who's reading this book, and what do you hope for them to do?

Yeah, those are both great questions. And I see the book and myself as part of an existing movement of people who are calling into question techno-utopianism, as it were. And so I don't see it as kicking off anything, but I do see it as more of a provocation than an end statement, a conclusion of this is what we do, right? Or marching orders. It's more to sort of provoke. The main readers I had in mind when I first started were my own students, because I had students from across the university, coming from the humanities, social sciences, black studies, along with my STEM students, my engineering students. And so part of it is to stage a conversation, to show a sort of meeting of the minds, how your interests should converge.
And so that's what I'm trying to do in the book: to bring together these fields, and also my students, in conversation, along with the people who care about these different things who may not necessarily be talking to one another. And so it is a provocation to jumpstart conversations so that people have to talk to each other. And then the questions that often come up are similar to how we started, which is students coming out of engineering and computer science saying, how can I contribute? What should I do differently? What should I be thinking? And so I'm thinking about how to respond to those kinds of questions in a productive way.

And then your second part was... The second part was, so much of your work is in translation: what do you hope readers will do about these issues? Yes. Yeah, no worries, you read a rough, rough draft.

So to answer that, for me, it's really about these different spheres of influence and activity. I don't want to create a sense that if you're not doing X, Y, and Z, you're not contributing. It's really thinking about what your sphere of influence and activity is and how you might raise a policy issue if you're working in the private sector. I talk a lot with K through 12 educators, and so part of it for me is to seed these ideas before the students even get to college. I think it's relevant to how we teach in secondary education. The other audience is the tech insiders who already care about this. And the way that I think about this growing movement, like Tech Won't Build It: these are people who took a good sociology class when they were in college, and now they're over there at Google, making trouble, seeding that, you know? But then the thing is, I want the book and my own sort of position in the academy to lend legitimacy to their concerns, so that they can be like, well, you don't believe me, check out the Polity book, right?
And so part of it is the way that the book can be used to bolster what people already care about and are already trying to do; the book itself becomes a tool that contributes to raising that kind of awareness. And that's been happening. Like, those kinds of connections have been happening.

Hi. Thank you for being here. I'm having a little bit of a moment, because I follow so much of your work and it always makes me feel so seen. So thank you. Kathy Pham. Nice to see you. Oh, nice to see you. So one of the things I do, and this is a comment and then a question: I help lead and run an organization that's currently funding CS programs to integrate ethics into curriculum. Some of the folks in this room are at universities that are working on that as well. And as you know, interdisciplinary work can be quite difficult, and some of the approaches to ethics can be, you know, pair philosophy with computer science and we're done. And someone laughed, but it's definitely a belief that's taking off in lots of different places, including companies that are building practices out of philosophers coming into tech companies to do that as well. Because oftentimes tech will lump anything that is not engineering into one whole discipline. And so, whether it's a philosopher or a race and gender scholar or a sociologist, it's all the same, and if we pick any one of them, we will build more ethical technology. So that was the comment. The question is: you've been so deeply in the field, what are some of the ways you've been successful in getting into medicine or getting into different groups to really get folks to take this work and figure out how to bring it into their field? Not just, I heard it at the seminar, cool, I believe you, but I'm back to my day job. How have you seen effective ways to really get people to take this, internalize it, and use it?
No, I mean, that's a really important question, but in some ways I have the easiest job, because the real work is done by those who are embedded in an institution, who then take the ideas and try to institutionalize them and try to change things. And so, as a provocateur, sort of coming in and out of spaces, talking with students at a medical school or talking with, you know, technologists in a company, my job is the easiest, I think. The hard work is for those who have to then grapple with the politics of the place, the sort of intransigence, the way that, you know, people often give lip service to great ideas but then balk when it comes to implementing various things. And so, you know, that is to acknowledge that rather than being deeply in the field, in some ways I'm sort of on the surface, trying to bring together many different, you know, communities and coalitions and so on. But it's an easier job than the one facing those who have to work through the nitty-gritty, you know? And so that's why I want to give respect to those who are actually trying to enact these, you know, changes within their locale, within their institution. And I would hope that people wouldn't look to kind of helicopter experts to come in and create a workshop: now you're certified in, you know, the sociology of technology, right? I would hope that's not how people would think about the work, or treat a book like this or, you know, this conversation as a stand-in for more substantive and difficult conversations and changes that have to happen at the policy and practical level as well. Over here. Thank you so much, Dr. Benjamin. My name is Savela Sambi, and I'm a fellow at the Carr Center and at Berkman Klein as well. My personal belief is that racism is not always out of ignorance. It's rational. It's a system, and as you say, it's productive, right?
And so it seems like people understand this, like the so-called perpetrators, or the people allowing this technology to cause these social harms. They sort of understand the power dynamics. And I'm wondering, considering that some people understand this deeply well, how do we shift the power imbalance to actually have them do something about it? Because it's one thing to know that something is wrong, and it's another thing to actually go and reform. And I don't think, at least in this society, we see a lot of reform, whatever sector we're looking into. So how do we go from the step of knowing that this is wrong to reforming this industry, and this society in general? I mean, in some ways you're asking, and it's a very important question we should all continuously wrestle with, what is the theory of change? Like, how do individuals change? How do institutions change? And that's a hard question, right? And so for me, when I think about different strategies at the level of, what do we do, how do we organize, there are ways we can think about making the status quo untenable, like making things unworkable. That's where you have protests, you have walkouts, you have whistleblowers, you know? So that's the kind of stick version of this. But then for me, and it kind of speaks to where I spend most of my time, which is in the educational realm, it's like, how do we also make change desirable? In the same way that domination is desirable, is there a way that we can make people crave change, and plant that seed, that longing for something other than what we already have? And so that's the kind of carrot version of this, in a way. And one of the ways that I think about it, and it may be a kind of social utopianism in itself, is this idea of linked fate, which has often been used to describe Black communal relationships.
But I think about it at a more universal scale, in the sense that, when we look at my work on public health and medicine, those who on one level are the perpetrators of unjust systems, who on one level seem to be benefiting from oppressive systems... you go down just a bit, and you can see that people who are the supposed beneficiaries of an unjust system are also harmed by it in various ways that they may or may not be cognizant of. And so if we think about it at the level of public health, if you look at different localities, states, countries where there is greater inequity, the haves in those contexts often fare worse than the haves in contexts where there's more equity. So that's just one example, and that's kind of an empirical question. There's some research to support it, but it's an open question, and I personally would love to amass more data for us to see the way that inequity is harmful to all, right? It's not a zero-sum game. There is some of that; you have to be willing to give up some shit for us to move on. But if you look at so many public health crises with respect to white Americans, who are the supposed beneficiaries of this racist system, it tells a different story about how the monopolization of resources, being the over-served in a context, can bite you in the ass. You look at the opioid crisis, you look at the reproductive health of white women, right? Some of you have probably seen the Unnatural Causes episode "When the Bough Breaks," the California Newsreel documentary. I used it in a lot of my classes. And one of the segments shows that if you took white women as a country in and of themselves, their reproductive health is worse than in many other countries, right? And so it's important to focus on the obvious targets of a racist system; in this case, Black women's reproductive health is a critical part of that. But for me, it's also interesting to think about those who, in this context, seem to be doing well.
But if you zoom out a little bit, actually you're not doing that great, white women who are going around policing everybody. Like, maybe if you were chill, Becky, then your stress levels and anxieties and your need to self-medicate would look different. All of that looking over your shoulder is internalized. And it gets under the skin. And so that's, for me, grounds for thinking about how you might seed a desire for change among those who, on one level, seem to be really benefiting from this pillaging, as Coates would say. And so that's where I spend a bit of my time. And I would encourage others to think about, and actually maybe produce, a clearinghouse of how inequity harms us all. Jessica Fjeld, I'm the Assistant Director of the Cyberlaw Clinic here at Berkman Klein. And we do work on the domestic dimensions of tech and exclusion, on civil rights and issues like that. But we also do work on the international dimensions of it. And I was wondering if you could talk a little bit about how you see this working out with so many US-headquartered tech companies that are focusing on perhaps US dimensions of the problem and how tech and exclusion play out, and then what happens when those policies are applied on the global stage, whether with respect to human rights (I do a bunch of work in tech and human rights) or more generally. Yeah, that's a great question. And that's really an opening to encourage, I think, more work, for people to think about how... So this idea of the New Jim Code is evoking a US history of white supremacy by evoking the New Jim Crow and Jim Crow. So it's really situated in thinking about how things play out here. So the challenge is to think about: what are the socially salient hierarchies in any region or any country? And then to ask similar questions of power and inequity and how they collude with technology in a particular place, to see how it plays out. And so there are things in common, I'm sure.
So I have a few examples in the book, drawing from India's national ID system and how that creates all kinds of new caste exclusions, people who are left out, if that biometric ID system is the gateway to accessing almost every public benefit and you don't have one, and how that can be used to create various exclusions; and looking at the way that Muslim populations are treated in China. So I have these very short vignettes that are kind of teasers, really, to say this is not just a US problem, but how it's playing out, and how we study it, I think, need to be situated. So I don't offer this as a kind of universal theory to explain everything everywhere. I think that that's actually counterproductive. And it's indicative of a particular way of thinking about knowledge: unless it's a grand theory, it's not very useful. And so I'm really encouraging students and other researchers to take the core questions, moving beyond how to create ethical tech to thinking about power and technology and how these are co-produced, and ask this of various places. And I'm so happy to hear that's part of the mandate of the unit where you work. Hi, my name is Zarena Mustafa. I'm a 1L here, and I used to work on the Hill, and I was very interested in criminal justice reform. And one of the ways I started looking at the intersection of tech and justice was with the First Step Act and how they had the risk and needs assessment tool in that act, and I was like, this is problematic. And so I guess I'm concerned about, or I'm curious about, what you would say is one of the most dire areas to start in. And I know you said people should operate in their spheres; if they're in the private sector, they should do work there. But what do you think is really urgent? And also, where do you think we have to give? Like you said, in airports, they're going to start using biometrics. Oh, they have. I know. And they have my face. Yeah, last week, yeah.
So should we oppose everything, and where, I guess, do we give, and what are the areas that we need to focus on? Yeah, those are two really hard and important questions. The second one especially, I would say, is a question that shouldn't be answered by one individual. It's one of those questions that has to be part of the struggle, part of the deliberation, part of the movement building: to think about how we create that prioritization together, rather than one person giving marching orders. But for the first question, I'm really interested in those sites that are offering alternatives, like the tech-for-good ethos, and people producing products and interventions that are various tech fixes for a problem, whether it's the problem of public safety or, for example, the problem of fair hiring. So there are all of these new software programs being rolled out in hundreds and hundreds of companies and organizations to make the process of hiring people more efficient, but also the promise is that it's more fair, because it's not a human interviewer sitting there and judging you, but a data-driven system that's collecting thousands of data points based on the way that you move your lip or your eye, or look down or up, and your posture. And so the idea is that more data will lead to more fair outcomes, right? And so it's these areas where the promise is so high: not only is it going to do what technology is meant to do, make things easier, faster, more efficient, but it has a social mandate as well, to bypass bias rather than create inequity. And so I'm not saying that I want everyone to rush to study these tech-for-good things, but that's what draws my interest, precisely because of the promise. But I don't see it as divorced from the more obvious harms. And so we do need people thinking about how tech is being rolled out in the context of the carceral state, right?
There's a whole new wave of initiatives to incorporate e-carceration, techno-correction, ankle monitors, tracking. And so there is, for those who are interested, a coalition of people who are working on this, and working on legislation around it as well, not just sort of academic critiques of it. And so I want us to spread out. And so the more and more obvious things that are coming at us, that are essentially wrapping these shackles around youth and tracking them and corralling them and finding ways to create technical violations to lock people back up, right? Across the spectrum, all of the things coming down the pike that are going to save us from ourselves, I think, need our attention. And not strictly with a cynical posture, like it's definitely going to be bad no matter what. That's not what I'm encouraging. But for us to have a sort of sociological skepticism: we've seen this before. We've seen things promised as reform turn out to be new ways to contain people and control people. So our skepticism is informed by research, and then we can engage it. But it's not a simple anti-tech posture, which is the way people like to hear it. They like to hear it as, oh, you're just against everything. And so that's, I think, important for us. Hi, Veronica Lewis. So I run a technology and management consulting firm in Kendall Square. We're part of the Kendall Square Association. I'm also in the Harvard Extension School and a part of the MIT Solve community. So to kind of address some of the things that I've heard in here about how we can organize and push back: Solve is a community at MIT that focuses on the United Nations Sustainable Development Goals. And I literally just came back from the UN on Sunday. What is the name? Solve? Yeah, Solve: solve.mit.edu. And so one of the things that we definitely focus on is the future of work, or actually we call it the work of the future.
We know work is gonna be around. So what does that look like? Because there is a lot of human-to-machine interaction with AI, and I travel a lot. I'm in the airport, they're scanning my face. I just scanned my vitamin bottle before I came and ordered it from Amazon, just one scan of the picture. There's a lot of concern around it. And there are a lot of people who are mobilizing, globally as well, to say that we can push back. At our Kendall Square Association anniversary meeting earlier this year, Jeremy Heimans spoke. And if you were there, tell him he was amazing. His new book is called New Power, and he talks about the collective power that we all have, and the dichotomy between old power and new power and what that looks like. So when you're thinking, it's just us against the big corporations, or us against all of the tech bros in Silicon Valley, how do we mobilize? Our voices right now are just so strong that if we say no to something collectively, they're changing, I mean, in so many ways. And so there are some characteristics of new power that drastically differ from what old power looks like, and it's so much more powerful. And so, for the first time next year (McKinsey pulled together some numbers, and I'll be willing to share all of this data with you), we're gonna have five generations in the workforce: Generation Z, Generation Y or the millennial generation, Generation X, baby boomers, and traditionalists. However, millennials are going to be the majority of that workforce, 50%. So when you're asking, how can I make my voice heard, what can we do about racism? Get out and vote, seriously. You're gonna be more of the population than any other demographic, right? The runner-up for who's gonna be the majority of the workforce is a tie between Gen X and Gen Z. So Gen Z is in the workforce now, and they're gonna be about 20% of the workforce, along with Gen X. Baby boomers are 10%, and traditionalists are 1%.
Traditionalists being that Depression-era generation. And so your voices, again, have that collective new power that can say, hey, we're not gonna stand for this. And then secondly, just a comment about AI and bias and how we're working together. I did a talk in Berlin earlier this year with Oracle; our company partners with Oracle and some tech companies. And it was a women-in-technology panel, and it was talking about how we can avoid our biases creeping into AI, because we're coding all the time. Germany has an artificial intelligence ethics commission that's made up of philosophers and doctors and technologists and so many people. And so I love what Dr. Benjamin said. It's not one person coming in as an expert and giving you a one-day workshop and, hey, we're all ethicists now. It is us creating some rules and laws. Is this equitable? Is this fair? Is this transparent? Is this inclusive? As we're thinking about how we use data, how technology is even being created. Should we create it just because we can? Just because we can doesn't mean that we should. So I told them that I think that's a great thing that they're doing, and we could stand to learn a lot from it; ironically so, right, we're learning a lot from Germany. We have a lot to learn from what they're doing, here in the US, by creating something like that. We've heard about Amazon Rekognition being bad initially, and we've heard about Google's ethics board, the one they created in one week, and it shut down because they had the wrong people on the board. We need to continue vocalizing and voicing that we wanna be a part of this change. We are serious about this. And the reason I mentioned the Solve community is because every year when we go to the UN to kick off General Assembly week, there are so many new solvers, as we call them, entrepreneurs who are coming up with these great ideas that address social impact issues. But it's like we're also dispersed, and it's great.
I think we should be global, but we should all come together and work together so that our collective voices can be that much more powerful. Thank you, I know that was a lot. Can you say your name again so people locally can reach out to you? Absolutely: Veronica Lewis. You can find me on LinkedIn, and on Twitter I'm at 404 Consultant, from Atlanta. Thank you. But you're based here? Yeah, oh, sure. Thank you. Thank you. Hi, thanks, Ruha, for your talk and for your work. It's really amazing and wonderful, and I'm a huge fan, as you know. Just really quickly, a question to which, I think, there is no answer, but I'm going to do what I can. So one of the core values in computer science is abstraction. And there's a recent article by Janet Vertesi and danah boyd and, like, three other people that sets out to go through what the core values are. But the one that stuck with me when I heard a presentation of it was abstraction, right? And I think that's part of the appeal around philosophy. It's like, oh, look, they're doing abstraction too. And your work reminds me of a now very old article by Cornel West, back when he was writing articles like that. He's ascended, he's doing other things. But the article is about the specificity of African-American oppression. And I'm not pretending; it took me, like, 14 times when I was in graduate school, reading that article, to finally understand what he was saying, but it's very generative. So my question, to which there is no answer, is whether you have thoughts about how we get past abstraction to the specificity of the productive racism of US-based tech companies. Because I think a lot of people, certainly not in this room, but a lot of people who are raised in the category of white, are going to hear your work and think, oh, that doesn't apply to me, because I don't have race. And I just want to use tech for good. And that's good, what you're doing for the Black people, although we don't need that.
So I just wonder how to penetrate that belief in abstraction, because I think it's so core to what you're doing. Yeah, no, thank you. That is a really powerful way to frame the problem. And the thing I would put in the mix there is that in many ways, the opposite of abstraction is not only specificity; it's also a kind of everything-is-everything. It's a way of thinking not just in the abstract, but with an encompassing vision. And that's part of this sort of data drive to collect everything. And there are two chapters in Captivating Technology in particular that I think start to get at this question. One is by Tamara K. Nopper. She's looking at financial technologies, and how you have FICO on the one hand, which is like the epitome of abstraction and reduction. But then the alternatives that are coming along in fintech counter that abstraction by saying, we are going to get data on everything and then calculate your risk and character. And so the alternative to abstraction is itself seeding all kinds of new forms of surveillance, where it's not just your economic activity that's being quantified, but all of your social connections. Whether someone you know has defaulted on a loan: that's part of your own risk assessment. And so it's interesting to think about, again, how the alternative to abstraction creates this encompassing vision of how to calculate and manage risk that can be even more controlling and oppressive. The other chapter, by Ron Eglash, is written in a more historical vein. And what's interesting, and I didn't quite get it until I was editing his chapter and reading it, is that we associate racial science and eugenics only with this idea of robbing people of their individuality and reducing people to categories, again, this reductive vision.
And one of the things he argues and shows is how holism, this idea of a holistic theory, has itself been a tool for racist science and oppression: this way of trying to explain everything. It's not being specific, but being general and trying to encompass everything, and how that was also part of the kernel of Nazi science, US racial eugenics, and so on. And so I would encourage those who are interested in this part of the conversation to check out especially those two chapters, because they trouble, in some ways, our easy assumption that the bad stuff we need to watch out for is only the reductive stuff, only the FICOs, only the crude racial categories. And I think we also have to be alert to, and thinking critically about, the stuff that doesn't look like that, the all-encompassing ways of thinking that can be even more, I think, oppressive in the end. I think we're kind of out of time, right? We have one more question. Can you see my hand? Okay, wonderful. Hi, I'm Bowman, I'm also a fellow here. And I hesitated in raising my hand, because this might be a complicated question. So I think of abstraction, formalism, and modeling as making things that are supposed to apply in a lot of different cases. And a lot of the problems I see come down to every abstraction inevitably failing somewhere, even if it was made for, quote unquote, good purposes. And I wonder how much we can design better abstractions, and how much the problem is abstraction itself, and how we can exist as a civilization, or how we can rethink that, if we're not relying on abstraction at some level, whether that's law, whether that's statistical measurement, whatever the abstraction happens to be. No, I do think that that really dovetails with Professor Daniels's question. And I don't have a pat answer, like we can or can't live with abstraction.
But I'll just... perhaps I'll let that question linger, and those who are looking for good dissertation topics can write it down. It is, it's a tough one. And I think when you dig deep into the examples and the questions that my work is posing, you don't eventually arrive at a pat answer to this, all right? And that's a good one. Thank you.