Hi, everyone. Welcome to the second event of the two-event program in our Tanner Lecture series tonight. We are thrilled to welcome Kate Crawford back to the stage, and to respond to her wonderful talk yesterday we have three extremely well-known and highly respected scholars here with us. Before we get started, I want to acknowledge the situation that's going on in the world. It is regrettable, it is horrible, and it is almost unfathomable. I think all of us feel a bit hesitant about doing anything else when something like this is going on. But I hope that some of the ideas here will have some positive reflection on events in the world and on what we're all trying to make sense of today. There are many things to talk about. I want to encourage everyone online to make use of the ability to enter your own questions. We're going to engage in a pretty lively dialogue, and we want this to be fun and interesting for everyone.

To jump off: Kate, as many of you know, is a research professor at USC Annenberg and a senior principal researcher at Microsoft. She's also the inaugural visiting chair for artificial intelligence and justice at the École Normale Supérieure in Paris, and she's leading an international working group on the foundations of machine learning. She's advised policymakers at the UN, the White House, the FTC, and the European Parliament, and she's received a number of prizes. Her latest book is Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, which is really our subject today.

Our first panelist is Marion Fourcade. She's a professor of sociology here at Berkeley and the director of the Social Science Matrix. She received her PhD from Harvard University in 2000 and is an alumna of the École Normale Supérieure in Paris, which I'm sure she can pronounce better than I can. A comparative sociologist by training, she's the author of the book Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s, and she's published works exploring national variations in neoliberal traditions, political mores, and valuation cultures. Most recently, her research focus has shifted to the rise, consolidation, and social consequences of new classification regimes powered by digital data and algorithms, and she has a forthcoming book on this, The Ordinal Society, from Harvard University Press.

Next is Trevor Paglen. We're proud that Trevor is a Berkeley alum: he did his undergrad here, earned an MFA from the University of Chicago, and then completed his PhD in geography here at Berkeley. He has gone on to become a renowned artist and geographer, and he lives and works in New York City. He has numerous books and one-person exhibitions to his name and has been featured in many major art shows around the world. He's launched his work into orbit, contributed to the Academy Award-winning film Citizenfour, and created a radioactive public sculpture in the exclusion zone of Fukushima, Japan. He has also won the Electronic Frontier Foundation Pioneer Award and was named a MacArthur Fellow in 2017.

And our last panelist is Sonia Katyal, who is a professor of law here at Berkeley. She focuses on the intersection of technology, intellectual property, and civil rights, and she also studies privacy and freedom of speech.
Her current projects focus on artificial intelligence and discrimination. She is the author of a forthcoming book on art, advertising, and activism, and a previous book, Property Outlaws, and she has written a number of articles on these topics. As you can see, we are very fortunate to have three incredibly appropriate and highly skilled experts on the topics that intersect around Kate's lecture yesterday. So what I'd like to do is open it up for Marion to begin with a few remarks in response to Kate's talk, then Trevor, then Sonia, and then we'll start some dialogue between the panelists. Soon we'll open it up to include your own questions, so please enter your questions at any time. Okay, Marion, please take the stage.

Hello, everyone. It's really a great and truly humbling pleasure to comment at this august event that is the Tanner Lecture, and I want to thank the Tanner Lecture Committee for inviting me to do so. As I was listening to Kate yesterday, I kept thinking that this was exactly what a Tanner Lecture should be: a text of breathtaking ambition, beautifully crafted, much like her book; a clear and sustained argument about AI's ground truth problem, richly illustrated with compelling examples; and a forceful and flawless delivery.

Ground truth, we understand, refers to the human ambition to capture the truth on the ground, the world as it is. Kate yesterday gave three examples: perspective in the 15th century, photography in the 19th, and military aerial reconnaissance in the 20th. The equivalents today would be things like satellite images, LIDAR, and visual recognition algorithms. They too seek to capture a truth about the world out there. And they are pretty good at it. My phone opens when I, and only I, look at it. My car tells me how to go someplace and rarely misses. My computer writes what I dictate in both French and English, despite my strong accent in the latter. So what is there to care about?

Well, for one thing, and we heard this yesterday in the talk, there's a lot of ugliness hidden beneath the lines of computer code that make my life more frictionless today. In short, we have private appropriation, labor exploitation, and the destruction of nature. Nothing very surprising here; that's capitalism for you. It delivers, but it stinks. We've known that since the industrial revolution. Another reason to care is that the ground truth of society is elusive, dangerous, and ugly. Elusive: a lot of stuff cannot be ground-truthed because there's no ground for it; there's no bijection, as Kate showed, between emotion and facial expression. Dangerous: a lot of stuff should not be ground-truthed because that ground is private. Ugly: all societies are people-sorting machines, splitting and lumping machines. Any search for a social ground truth will find itself mired in value judgments, social hierarchies, and social exclusions. Unlike things, though, people do care about the way they are classified, measured, and ranked, and so we've seen a lot of public and expert revulsion at the work that human-facing algorithms do. But here's what I think Kate is telling us: the social ground truth created by scraping and labeling the internet is often uncomfortable, morally objectionable, or downright awful. But it is not necessarily inaccurate. Quite the contrary.
The problem often comes from the fact that it is so true, in the sense that it may be capturing the world as it is: opinionated, prejudiced, violent, predatory, oppressive. Those are things that cannot be fixed iteratively. So rather than thinking about ground truths as datasets that must be de-biased, perhaps we should regard some of them as wholly illegitimate. We heard yesterday that there is no legitimate visual representation of a concept or a function like "CEO." Other datasets maybe ought to be built anew, in line with evolving conceptions of justice, identity, and dignity.

But I'd like to raise a puzzle. Thinking about ground truths as active political projects, rather than as passive objects to be de-politicized around the edges, may in fact return us to an old promise of algorithms. In many instances, including credit, policing, and insurance, a lot of algorithms were developed claiming to bolster fairness in a context where demographic categorization and open discrimination were the norm. Somehow we forgot that history. In our rush to analyze what's wrong with today's technologies, we've lost sight of what was terribly wrong with yesterday's. So one very honest question that I'd like to pose to Kate is: have we gained nothing? And if we have gained something, what can we learn from that earlier transition that we can bring to bear on the next one?

The other point, and it's more a comment than a question really, is much broader, and here I am expanding well beyond visual classifications, which were the topic of yesterday's lecture. Kate said yesterday that we get the algorithms of our ground truth. She's right. But why do we get these algorithms in the first place? Why do we strive for this kind of ordering? From a sociological point of view, algorithms are allocative mechanisms that distribute people and things across various economic, social, physical, and temporal spaces in a context of scarce resources. To manage this allocative process, we have institutionalized the sorting and slotting of people into categories of deservingness, risk, desirability, and likeness. Perhaps this is where the heart of our problematic politics of algorithms lies in the first place. In fact, it may lie in our lack of solidarity with each other, our absence of generosity toward the most vulnerable. All of that makes the need to differentiate and to allocate so essential to the functioning of our basic social institutions. And of course I'm thinking especially of this country, where we do differentiate so much, right? I'm thinking about healthcare, various forms of social insurance, and so on. Perhaps what we need more than anything is a real socialization of risks and benefits, so that people do not face the algorithm alone, or do not face the algorithm at all, tout court. Perhaps if we build more universal and supportive social institutions, we will lessen the need for algorithms altogether, and with it, our obsession with social hierarchy and difference. In other words, we also get the algorithms because of the society we built; facing that, perhaps, should be a political horizon. So I pose that as a provocation for the conversation that is to follow.

Wonderful. Thank you, Marion. Kate, would you like to respond now or hold off until we hear from everyone? I'm happy to hold. Yeah, that would be great. Okay, great. So Trevor, if you would: do you wish to share your screen? I'll share it in a moment.
So I want to thank the other panelists: Sonia, a new friend whom I was able to meet in person for the first time a few months ago; Marion, hopefully a future friend; and of course Ken, a very old friend and mentor from my time at Berkeley, who continues to be incredibly inspiring to me. And most of all, of course, I want to thank you, Kate, for the really fantastic and illuminating talk yesterday. I think we can all see why Kate is one of the most important voices out there thinking critically about technical systems. She is doing a massive service to the field by excavating the blind spots, hidden assumptions, and unexamined forms of ideology that are often built into what are sometimes conceived of as simply mathematical or apolitical machines.

I'm going to tell a bunch of stories, basically, and I'll start with one. About five years ago, a big tech company, Facebook, called me up and asked me to come and talk to their art and AI group. And I thought, sure, let's see what those guys are up to. So I went over there, and we were all sitting around, and they started showing me a demo of a system they were working on that was going to generate what they called art. They explained to me that art was partly built on tradition and partly built on creativity, and they thought, well, how do we simulate that? So what they had done was build datasets of artworks and organize them into categories like cubism, abstract expressionism, and pop art; that, they explained, was the tradition part. Then they were introducing randomness into the generator they were building, and they said, well, randomness, that's a proxy for creativity here. Needless to say, my jaw was on the floor, not only at the inanity of the assumptions built into this project, but at the hubris in the room, quite honestly. I think I was led out the door when I just kind of said, you guys know that cubism isn't real, right? There's no such thing as cubism.

That leads to a slightly deeper dive I want to go into, about a small comment that Kate made in her talk when she showed a slide yesterday of the "apple" category in ImageNet. Kate said classifying apples might not be particularly controversial, with a caveat that this was actually much more complicated. And I know that Kate does not think that classifying apples is uncomplicated, because we have been talking about it nonstop for several years. So I want to talk a little bit about them apples. How about them apples? If we include something like apples in a training set for computer vision, we make a few assumptions. One, we assume that pictures point to the things they represent. Two, we assume that the things they point to actually exist. In the past, I've referred to this set of assumptions in computer vision as a kind of machine realism. And I think that both of these assumptions are in fact incorrect. On the question of meaning: pictures of apples, historically and currently, rarely signify the concept or reality of apples. Historically, they have signified things like health, abundance, sin, death, knowledge, love, and so on; often, as you can see, contradictory things. And those meanings are constantly changing. Then there's this other assumption about essence: that there is such a thing as an apple.
And of course, this is also an incorrect assumption. On the few occasions that I've been able to spend some time with biologists, I always like to ask the question: do species exist? And they get into this big conversation, they hem and they haw, and they say, well, kind of, not really; it's complicated, is the answer. And that's right, because of course it is complicated. We know this from Darwin: the living world is one of variation and one of change. There are no ideal types of animals or plants or any life forms. Everything is a mutant; everything is changing all the time. And so this brings us to the question of whether, given these conditions, anything is really imageable in a quantitative context.

When I started thinking about this question, I started thinking about my own training as a geographer, and about what was called the quantitative revolution in geography. This was a kind of crisis that happened in the 1950s and 60s. Before that, geographers would go out and describe a place; it was considered a practice of description. And they all got really self-hating in the 50s and 60s; they felt they weren't scientific enough. So they started using all kinds of numbers and equations and statistics; they wanted to science up the field. My PhD advisor at Berkeley was a really wonderful, fantastic man named Allan Pred. He'd begun his career doing this quantified geography stuff, and he'd even won geography's version of the Nobel Prize for quantitative work on cities. But later in his life he renounced all of his earlier work; he said it was all bullshit. And he started writing these weird books that were montages of stories and images, really inspired by Walter Benjamin's accounts of Paris. There was a funny moment in my PhD when I had finished my coursework and was preparing for my exams, and I realized I had done zero training in methodology at all. I went into his office in a panic and said, dude, Allan, I've taken how many classes here over how many years, and I've realized there's this huge hole in my training: we have not done any methodology. And he basically looked at me as if to say, fantastic, my plan is coming together. I was like, what are you talking about? And he said, I know all about methodologies; they're all these cookie-cutter methodologies, you can use them, and they're mostly recipes for bullshit. Because, he said, methodologies have to arise from the materials that you're looking at. Look at your things; look again; look at them over and over and over, and they will gradually just tell you how they want to be studied, for lack of a better word. And that is always the way that I've worked, I guess, and this method very much inspired the, for lack of a better word, methodology that Kate and I have been trying to develop over several years of excavating these kinds of training data.

And this brings us back to this question of apples, this question of training data and computer vision in particular. I've really spent years racking my brain about these apples and have tried to imagine in what situations this kind of computer vision classification could be useful. And the best answer I've been able to come up with is quality assurance in industrial agriculture, right?
You try to build systems that automate the selection of apples according to whatever criteria you want them to have. This is something you would do at a huge industrial scale; it is not something you would do at the farmers market, for example. So the point I'm getting at is that maybe the economic and industrial contexts determine what is sensible within the system. This leads us to a pretty remarkable conclusion, or at least an idea, which is that there is a kind of political economy of meaning-making going on here: the question of what a picture means is a function of the industrial context that it's in. Now, there are some precedents for this idea, most notably the work of the former Berkeley art historian Michael Baxandall, who came to a similar conclusion in his survey of Italian Renaissance painting, and I want to give a shout-out to Leonardo Impett for making this connection to Baxandall's philosophy of art. So I think this provides us with an opportunity to add another category to Kate's taxonomies of the unstable ground truths on which much machine learning is built. We can add a category like the economic ground, where we think about the political, economic, and industrial conditions that encourage certain forms of data and meaning-making and strongly discourage others.

I want to conclude by seconding Kate's conclusion in her talk yesterday, where she called for bringing new kinds of imagination to the field. She called for us to step back as far as we can, question as many of our assumptions as we can, and think big, truly big, about what we want the world to look like, because the tools that we create will, of course, make that world. And I think there is another side to that call for imagination as well, which is a call for humility: a kind of humility in the culture of machine learning, a sense that maybe machine learning isn't the solution to every problem, that maybe it's not even a solution to very many problems at all, and that maybe we shouldn't be too confident in our own abilities to make those judgments. Thank you.

Thank you, Trevor. Thank you very much. And I can't resist your choice of the apple as an exemplar. Of course, there's the apple on my laptop right here, the one we're all taking a bite out of every day, and its implications as an icon of Western computing and technology still land pretty well in regard to its original allusion to Adam and Eve. Okay, Sonia, please; we welcome your input on this.

Great. Thanks so much. It's just such a pleasure to be here, and I'm so happy to be sharing this space with so many people whose work I admire so much. I want to return to something that Jennifer Chayes said yesterday in response to your talk, which I thought was really insightful: that your talk, which was so wonderful and brilliant, was literally the best argument for why institutions of higher education need to think about how to integrate notions of society and questions of social construction with ideas of data science. And the way that you have done that in your own book, and the mapping of it, is just exemplary. It's remarkable. And it gives us all a bunch of different maps, right, in terms of thinking about how to use those tools and thinking about AI. I think the value of your talk, to me, was in its range, right?
I've been thinking about social construction for a long time, but the way that you applied it to data science, the way that you applied it to questions of materiality, was just so incredibly powerful. But I found myself coming back to a place that is familiar to me, which is: what do we do, knowing the kinds of things that we now know about the notion of ground truth? What do we do after that excavation? I'm sure that you have many responses to both Marion and Trevor, but I would love to hear a little bit more about the point that you ended on, which is the notion of material ethics: the idea that we can take larger conversations about ethics and responsibility and social responsibility and integrate them into questioning the assumptions of AI, as Trevor points out. And the question that I am left with, because I want to be an optimist, I'm a diehard optimist, is this: so many scholars have elucidated for us how social construction has operated, and now your talk presents us with the idea of social construction operating on a grand, massive, industrial, infrastructural level, in terms of the products that are developed from the social construction of AI. So I really wanted to open up a conversation, first, about the idea of ethics and how we can think about integrating it into AI, if that's even possible, or whether the solution is, as Marion suggested, to abandon it entirely. What do we do? The other thing I'm really thinking about, particularly because, Trevor, I'm a huge fan of your work and have followed it for years, is that there is a very powerful role for, per the title of this panel, art and activism here, in terms of both excavating some of the issues that AI creates and educating the public in ways that I think are really important. Those of us who are versed in law, policy, and sociology have been thinking about social construction for a long time. But when you see the work that Kate has done with Trevor on such a direct level, it really does raise the possibility of thinking about different ways to educate the public more deeply and more visually about the risks and the dangers of AI.

Excellent. Can I add a little bit, since Sonia did mention what I said yesterday? We are thinking about educating tens of thousands of students at Berkeley, and many more than that across the world, as we bring them these tools of data science and AI and teach them these tools. And we've thought here very deliberately about how we weave ethical considerations into this. There are four components to our data science major: computing, statistics, human contexts and ethics, and a disciplinary emphasis. And I believe that as we train people, at every point we should almost be teaching them how to ethically hack what they are building. What vulnerabilities does what we are creating leave open? How might people exploit those vulnerabilities? When we create power, we know that people will try to usurp that power.
And so, in all that I have heard yesterday and today about the non-absolutism of ground truth and about the way in which we approach these societal issues: what are the legal frameworks, the normative frameworks, the moral frameworks, the artistic frameworks around this? I would love to hear from you, Kate, and you and I have worked in each other's orbits for a dozen years: how might you choose to educate students who are entering college, who are embarking on learning some of this? How do we teach them to be better members of a civil society that is permeated with AI?

Great. Thank you, Jennifer. That's actually a perfect setup. If you would, Kate, maybe you can take that question first and take these in reverse order, because it's very much connected with Sonia's question. So, Kate.

Thank you so much, Ken. I want to begin by expressing enormous gratitude to the panelists today. One of the extraordinary gifts of presenting a Tanner Lecture is that you also get to suggest the people who have inspired you and whose work has been such an influence. So this really is a group of people to whom I owe an enormous debt, in terms of the work that you've done and what you've given the world. You are all inspirations to me, intellectually, artistically, and creatively. It's an extraordinary group, so I want to start there. I also want to riff gently on Trevor's provocation to the machine learning world to ground itself in humility. This is also an experience of being deeply humbled for me, because of the extraordinary things that you've said, the contributions and critiques and expansions of these ideas, which are just so generative.

My tendency, because of the way the questions have been ordered, is actually to invert, if I may, Ken: to start with Marion and end with Jennifer's questions, because in some ways they follow from and complicate each other. There are no easy answers here, and ending on pedagogy, I think, will help us see why these complications matter to how we think about educating in this space. So I want to begin with Marion's spectacular provocation around this idea of, you know, have we gained nothing from this expansion of algorithmic logics and conveniences into our lives and our institutions and our publics? The first response I would have is: who is we? Who is the we who gains? Because I think that immediately reveals that so many of these systems have given so much depending on your positionality, your access to privilege, and your access to other forms of material benefit. And this is where your work, Marion, has been so extraordinarily important in thinking about the ways in which scoring systems feed into particular types of market logics that have radically expanded just in the last hundred years. When I think about the way your work has pointed to this shifting away from socialized forms of risk, we can think about insurance, and the way we've moved away from the idea of insurance insuring groups of people toward a hyper-individualized risk profile that, as you say in your work, is all about you. This has twin impacts.
One is that we lose that solidarity in the face of the algorithm, because people have been so closely monitored that how much you exercise, what you eat, and what you buy is suddenly reflected back in a social score about you. But it has this other terrifying impact, which is that it has taken away the sense that there are group, structural, and historical forms of privilege and risk that are not being modulated by those forms of algorithmic scoring. And this is also where, when we start to look at this question of risk, we need to flip the lens on how we think about AI as a form of capital. I'm thinking here of your work with Kieran Healy on what you called, I think very provocatively, not just übercapital but eigencapital: the ways in which all of these forms of data, not just what we would understand as within the credit system, but everything from who is in our social networks to the things that we say online to our Instagram photos, have become part of this credit-modulating system that is constantly shifting and extremely difficult to track. So my reverse provocation to you, Marion, would be to think about the ways in which AI operates as capital in almost two ways. There's an obvious way in which, yes, it functions as a type of capital in the sense that it is a process of circulation for extracting value. But in another sense, it is also an expansionist mechanism for what Marx called primitive accumulation, or what David Harvey would call accumulation by dispossession: this expansion of algorithms into spaces that capital did not previously have access to, this capture of the commons. And there I think we have to ask whether we have actually lost rather than gained, because the last things that we were seeing as public commons, everything from how we sleep to how we move in public space, have indeed become part of that algorithmic modulation.

Trevor, to turn to you: I know we keep talking about this question of why, when we look at classification of the world versus classification of humans, the former is so commonly assumed to be fine, while the minute we look at social classification, of course, these harms are very evident to us. But what happens when we classify an apple, and how are those sorts of decisions being made? I wanted to open up this question of imageability, and particularly this idea of imageability as a quantitative strategy. As you and I both know, after our work on ImageNet, the ImageNet creators released a new version of ImageNet that includes a new imageability score, in response to critiques that there were so many ideas embedded in this dataset that were inherently not visual, from light to data to the moral judgments that we've discussed. Opening up that category of imageability is so extraordinary, because when you look at how things have been ranked, it again opens up this whole world of illogic, so that the idea of a grandmother is given an imageability score of five. What is it to be able to look at someone and know that their children have had children? Again, this is not an imageable concept. So I think this brings us back to the idea of a political economy of meaning-making.
And I think you're absolutely right to say that this is where this work is pushing, because that is precisely why things are being given an imageability score: there are particular forms of profit to be gained, and it also feeds into particular types of often very militarized logics. So to answer your question of when an apple is more than just an apple: I would also say that every time you have a photograph of yourself with an apple on Instagram, that is being scraped and understood as a score of your health, the idea being that your insurer should recognize your good practices, as opposed to when you have a photo holding a doughnut. So indeed, the apple itself has become an algorithmic modifier.

And Sonia, such a pleasure. Your work has been so important in thinking about these questions, and I want to particularly note your recent work on the gender panopticon, on the way in which these systems are creating extraordinarily heteronormative and exclusionary logics in which trans and nonbinary people, in particular, are constantly being written out and made invisible. And you have probably asked one of the toughest questions, which is: what do we do, and how might we answer that call with a material ethics? I want to be honest with you. I've had enormous difficulty with the way in which the term "ethics" has been in some ways adopted, in some ways defanged, and in other ways used as a form of ethics-washing, particularly in tech writ large, though you could look to many industries that have used and abused this term. And interestingly, I've never applied it to myself. But we are at the point now where I do think we have to talk about materiality in different ways, grounded in an ethic. You will notice that I'm using the concept of a material ethics of AI, not AI ethics; I think there's a very important distinction to be made there. We're talking about an ethics of this broader world, where we have to start thinking about core questions of justice and sustainability. So in that sense, there's a hard conversation to be had about where that word serves and does not serve the broader political projects of justice in relation to these systems.

And Jennifer, I wish I had an easy answer for you as to what the pedagogical responses should be, as to how we might want to change the way that computer science, and specifically machine learning, is being taught today. All I would say is that I think it does need to be substantially changed. Now, you have spoken powerfully about forms of interdisciplinarity. And the question that will always provoke, and it's a good question, is which disciplines get to sit at that table, and who gets to have a say in how those systems will be built. It is very common, and I've unfortunately been in many of the rooms, to hear interdisciplinarity described as having a computer scientist and a statistician and maybe somebody who is a bioethicist. This is absolutely not enough, and it is already such an attenuation of even the early histories of AI in the 1960s, when we saw anthropologists, sociologists, and art historians coming to the table and actually being part of those conversations. So again, I would say that sometimes interdisciplinarity can sound like too easy a solution, because it is often a very narrow pool of whose expertise counts.
I think it also means we have to ask how we think about accountability to the communities on whom these systems are frequently foisted from above. And that is one of the hardest questions, because it is not easy. But it is part of the question we have to face: the exclusion of communities from the process of designing systems has brought about so many harms, particularly in the last few years, and I think you can look at decades of harms here as well. So if there's a core question that we want to teach people to ask of these systems, I would say: question them. The first thing we can do is to question them far more radically than we have, and to think about their impacts in a far more complex and long-term way. But thank you so much for these provocations. I look forward to our discussion.

Terrific. Thank you, Kate. Excellent. Okay, so let me turn back to the panelists to respond to Kate, and then we'll go from there. So Marion, you first.

Well, thank you, Kate, and also thank you for the many references I didn't expect to hear. Eigencapital, in this conversation! So I'd like to go back to two really important points that you made. One is: who is the we? A lot of these systems were developed, and continue to be developed, under an incredibly flourishing and incredibly powerful language of inclusion: digital inclusion, financial inclusion through digitization, and so on. We have that in the developing world; we have it directed toward the poorest, most vulnerable segments of the population. And that has been a huge argument; I'm thinking particularly, of course, of credit in general. But I think it is really important to try to understand that. If you think about what's happened to finance, you've had financial inclusion; the fact is, it has happened, right? Now, whether it is desirable or not is another question, which I think we should treat separately. But we've had that expansion. It used to be that women couldn't get a loan without their husbands signing for it. Black people couldn't borrow because of the neighborhoods they lived in, and so on. Some of this has persisted, but there has been a transformation, and I think it's important to talk about transformation. What we get instead is a sort of recomposition, a reorganization, where new kinds of inequalities are emerging. Of course the old ones are still there, as you very eloquently mentioned, but we also have new forms of stratification emerging that are native to the digital economy. That is one of the things my work with Kieran is trying to show: maybe it matters more to your life today whether you have a credit score below, I don't know, 600 or something; maybe that is something that really organizes a lot of the access you get to basic services, housing, insurance, maybe a job, and so on. So that's one thing that I think is important.
The other thing you said that I think is very important, and it goes back to a point Trevor was making, is that digitization and the development of machine learning are very much processes associated with financialization. It is really about the financialization of everything, and the collateralization of everything for financial purposes. We see that continuing today with things like NFTs, and even the collateralization of everything about your person, right? The process is still ongoing. So the political economy of these systems is very much about the rise of finance, and that is the broader context in which these things are happening.

Great. Trevor? Cool. I mean, I just have random thoughts at this point as a response. With this question of justice: so many times, as I see these arguments about attempts to create fairness in algorithms or whatever, I'm constantly reminded that this conversation has happened before, and it happened in prison sentencing, right? Back in the day, in the sixties and before, if you were sentenced for a crime, you got an indeterminate sentence, like three to seven years or something like that, and then you would go before a judge who decided how long you would actually serve. And what ended up happening was that white people would do three years and Black people would do seven years. It had this very racist outcome. And the response was that well-meaning liberal people said, well, we should have truth in sentencing: you should get a sentence, and that should be it, and that will fix the whole problem. Well, guess what? It didn't fix the problem. It created a situation where everybody got seven-year sentences. You had a massive explosion in the prisons, and it didn't become any less racist at all; quite the opposite. It also created the opportunity for sentence enhancements, and it created a situation whereby the lever you could pull was the number on that sentence, and so it would continually go up and up and up. So I'm using that as a historical reminder that there are deeper problems, and that how to address those deeper problems is hard. It's really, really hard. In terms of thinking about the interdisciplinary conversation in machine learning, I think there's something to learn from that: not only thinking about who is in the room, but thinking in a deep way about historical contexts, contexts that are perhaps even deeper than the framework in which you're posing the question.

And about the ethics: I think it's really funny. I hadn't looked at the race-classification papers recently, but in preparation for your talk I went back in to see whether people are still doing this, and the answer is yes. With the caveat, and this started blowing my mind, that in the introductions of these papers now it's like, oh, we have to recognize race as a historical construct coming out of colonialism. So, let's get busy building race classifiers. I'm like, what the hell is going on here? How does this sense or awareness that maybe there should be ethics in AI get stated, while the same shit gets done on the back end? So it's just that.
And I guess I had a question for you, Ken. You work in labs and I don't; obviously, I'm some artist, and people just look at me like, oh, that's weird. But the question I have is: is basic research possible, given all of the things we're talking about here? I mean that as a very serious question. What is that, in your mind and in your pedagogy? What is the distinction between what would be akin to basic research in physics versus, you know, sociological tinkering, or what have you?

Well, thanks, Trevor. I appreciate that, because I've been wanting to jump in a little bit. What I want to note is that I was struck by the term eigencapital, and it's very interesting because it references the idea of eigenvectors and principal component analysis. That was really at the root of a very early classification, very much in line with what Marion said about how algorithms allocate, and it came with the terrible history of IQ. The idea of IQ was to use dimensionality reduction to find these dimensions and score people along them, with the implicit assumption that there was one metric of intelligence: this singular idea that everyone could be ranked on a linear scale of intelligence. That was a huge distortion, and it was used by the military, obviously, to rank recruits and such. It's interesting, because we did a project a few years ago where we tried to use principal component analysis to analyze patterns in ratings of jokes. We had a number of jokes, and we had people rating them, and we applied principal component analysis to project the ratings down onto a plane; I was hoping we would find certain clusters, in other words, senses of humor (see the sketch below). What was very frustrating, and ultimately more interesting and revealing, was that everybody was all over the place. It looked like a huge nighttime sky of constellations. There weren't really many clusters; humor was just hugely spread out. And I think this is very interesting in regard to your comment about Ekman, Kate. In fairness, I do think there's a little more to his analysis: I think the six emotions are acknowledged as a very simplified attempt to capture some principal components of these dimensions, with the recognition that they are by no means exhaustive or exclusive.

But this idea of history, of understanding where these tools come from, is extremely important and very much absent in most of the training for scientists and engineers, and I see Jennifer nodding here. What I keep coming back to is this huge gulf that continues to exist between the two cultures. We have it here, as far advanced as I feel Berkeley is in so many ways; that gulf continues to persist, and so you have huge misunderstandings. Last week, for example, I was in a discussion where a number of colleagues were talking about critical race theory, and they brought up Popper in order to say, this is not a theory, there's no refutability. And it was really interesting, because they didn't have any real sense of critical theory, what it meant, and how there can be sociological and philosophical theory. In their minds, the only theory was mathematical theory.
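To make that joke-ratings example concrete, here is a minimal sketch of the kind of principal component projection Ken describes, assuming a hypothetical random ratings matrix rather than the actual study data:

```python
# A minimal sketch, not the actual study code: project a hypothetical
# joke-ratings matrix (rows = raters, columns = jokes) onto its first
# two principal components. Tight clusters in the resulting plane would
# suggest distinct "senses of humor"; a diffuse, constellation-like
# spread suggests there are none.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
ratings = rng.uniform(-10, 10, size=(200, 30))  # 200 raters, 30 jokes

pca = PCA(n_components=2)
plane = pca.fit_transform(ratings)  # each rater becomes a point in 2-D

# Fraction of rating variance the two components capture; a low value
# here is another sign that no single axis of "humor" dominates.
print(pca.explained_variance_ratio_)
```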
So that persists, and it is still very alive and thriving. I think the key point is that there is a very strong cultural gap going on. And that is what I think is so interesting about what you, Kate, and Marion and Sonia are doing: by coming into this other world and really diving into it deeply, you are excavating, revealing things that the engineers are not realizing themselves, and showing them. Now, they don't always like what you show them, right? You've seen the reactions. The point, though, is that there is something very deep going on here, and it is age-old in terms of looking back at history. So to your points about what the ethics are, what is normative, and how we can learn from this and move forward in terms of pedagogy: I think one thing is understanding history, to the degree that our students can really hear these stories and know about these backgrounds. For example, I learned a number of things about speech recognition that I never knew before. And that guy, who was he? He was one of the speech recognition guys, but he was a shady character, and he was part of what?

Robert Mercer, perhaps? The reclusive billionaire.

Yes, yes. I never knew that whole story. That has to be taught in our data science classes; we need to understand those things. And I have to say, also as an artist: what you did with the piece on ImageNet was take it and bring it very vividly back to the present moment, and you showed it operating on ourselves, because we went to that page, it took a picture of us, and then it categorized us. That was so gripping and compelling. I salute you, because it was a hugely brilliant idea: rather than just giving examples in the abstract, you used our own faces to exemplify. And that kind of work is exactly what I think is needed: vivid, interesting stories that tell us and show us where these histories and nuances lie. So I'm curious how others feel about this idea of using art as a form of activism to alert, and essentially educate, the broader public and the scientists and engineers of this world.

Can I jump in here, Ken, for a second? It's funny, because we talk about math and the technical, and in fact I'm a mathematician at heart, and math is beautiful. When I look at a proof, sometimes I can tell who wrote that proof, in the same way that when I walk through a museum, there is a piece of art I haven't seen, but I know whose art it is. So there is a beauty in mathematics. And when you look at the evolution of computer science, part of it came out of mathematics and part of it came out of engineering; part of it came out of trying to find the perfect explanation, the perfect elucidation, and part of it came out of trying to solve a problem. Both are legitimate, okay? But we have to bring them back together now. We have to bring them back together. And I feel that with this new college we are forming, that is what we are doing.
Many, many people I've seen go into computer science in the last couple of decades are people who would have gone into mathematics, who are searching for the beautiful, who are asking questions and searching for answers in the same way that a philosopher would, or honestly, I think, in the same way an artist would. It's not everything, but it is the essence of something. And so I believe there will be a resonance with people in the technical community if we highlight that beauty, if we highlight that attempt to understand what is just and what is unjust. It's not simple, of course; there is not one answer to that, okay? But there is a part of this pull between complexifying and simplifying that is beauty; that is what art does. Art trades off between these poles of complexifying and simplifying and bringing forth the exemplar. So I do believe there is space for this. Kate knows we tried to build this at Microsoft Research; I did hire philosophers and anthropologists. And I think it is what we are building here; we are trying to re-initiate this. I also want to go back to what Kate said about questions, because for the people who are not going into the field, I do not believe you can be a member of a civil society without having what you need to question all these data-derived conclusions and decisions that come your way. And the people who are going to be the practitioners in this field must question all the time: what are they attempting to achieve, who is served, and who is not served by it? So I just don't believe the gulf is as wide as you think. I think we're bringing things back together, and that is what we have to achieve here. Just my two cents.

Yeah, one thing I want to jump in on, just as a lawyer: one of the things I often struggle with is the fact that so much of the history of our civil rights movement is premised on the social construction of identity categories, right? And so much of what we learn in critical race theory and all of these different bodies of thought is premised on the notion that there are identities that are constructed, and that these identities can be deconstructed. And where I come out, maybe because I am a lawyer and I do believe in trying to figure out a solution to a problem, which is partly why AI is so interesting to me, because there really is no single solution, is that there are myriad ways you can approach this. Some people want more representation in datasets; others want us to question the very assumptions that are built into those datasets; and still others would want us to abandon those systems entirely. And I think this is where your point, Kate, about listening to the communities that are being targeted by these systems is really important, because not every community is going to have one answer, and it's important for us to recognize that. This is the project of democracy, right? And it is a project now being undertaken by companies in ways that I think often overlook the complexity of these categories. These conversations have to happen in order for the work to be effective. But at the same time, we also have to recognize, as Trevor's point about the sentencing guidelines shows us, that there are times when you really want to question the efficacy of a system and think about its impacts on particular groups. There is not one answer here.
And I think that's one of the nice things about your book: it provides us with a whole variety of different lenses to think with.

Sonia, can I just thank you so much for bringing in that perspective? Because I was having flashbacks to being in law school and thinking about critical race theory when it was a thing that you learned in law school, an actual body of thinking and thought, and to see the way it has been so horrifically turned into a political football, for all of the worst reasons, reminds us that knowledge is itself a way of trying to create political force and political power. But the thing I would also really love to hear from you, because you've thought about this in relation to art, is this undercurrent which we haven't quite touched on yet and which is so important to me in the role of art: more than its role in aesthetics and beauty, its role in politics, and the way in which art can teach us something about the politics of refusal, something that I write about in Atlas of AI and which was also the topic of a great conference at Berkeley just a couple of years ago. For me, the core question, the democratic question, at the heart of what you've just told us, which is absolutely right, that people will have very different responses in their desire to be visible, to be present in data collection, in systems of scoring and classification, is this: where is the ability to say no? We have absolutely written that out of so many of these systems for so long, and I think it's a fiction at this point, this idea that you can say, oh well, I don't use Facebook, or I don't have a phone, and that somehow now I'm out. You're not. You have been absolutely ingested in every way; this is precisely that capture of the commons. So for me, and this brings us back to the question you posed at the beginning, what is the role of art here? I've learned so much from collaborating with artists like Trevor, like Vladan Joler, like Heather Dewey-Hagborg, and many others, about how you actually make this a public conversation where people first of all can see these systems: can see how they work, can see how they fail, can see how in many ways they're just so basic, and in many cases folding in ideas that I think are genuinely unscientific and genuinely suspect. And then, how do we make that public conversation lead into a genuine politics of refusal, to say that these systems can be used well in some circumscribed domains but are absolutely inappropriate in others? For me, that is such an important thing for us to center on as well.

Good, good. One thing I want to say is that sometimes the conversation becomes a little bit polemical, in the sense that the engineers, and I'm speaking as one, with my engineer hat on today, can feel that there is a sense of condemnation. And I think it's also important that we all reflect within our own disciplines, in our own thinking about these things, and examine our own assumptions. The humanities and social sciences are often doing that, and I take it as a great inspiration how there is self-critique and mutual critique going on there.
And I think that, from the engineers' perspective, they will also appreciate that kind of self-questioning in the context of these discussions, because they are not, as is sometimes imagined, monstrously diabolical. As Jennifer said, they are oftentimes very well-meaning and well-intentioned, and in some sense unfamiliar with these deeper issues and histories. They are enamored of the beauty of the algorithm, the mathematics, what it can do. So it's not that they're doing this out of malevolent intent. How can we peel that apart and say: here, rather than condemning this, let's change it and examine it in a way that we can all benefit from? Because I would offer that that is a way to reach my colleagues in a good way, in a spirit of more engagement. Does that make sense?

I think it makes a ton of sense. First of all, I completely agree with you. I also think social scientists and humanities people need to look at technology more and look at engineering more. There have been plenty of art historians I've gone to and said, you should be looking at computer vision, and they say, oh, I don't do technology. And I'm like, it isn't just technology; there's all this other stuff going on in there too. There are images, there are ways of interpretation, there is all this stuff that you know how to do, but we've got to look at what the society is now, at this broader context. To bring it back to Jennifer's provocation about pedagogy: I think that's really hard. I literally have terminal degrees in art and social science; I can't have that level of expertise on the technology and engineering side as well. So how you create a pedagogical platform that is much more blended, combining critical thinking and technology, is a really hard question and a really urgent one. But I would love to hear people's thoughts on that. It's something that's very challenging, I think.

There is a point that I was trying to make in my comments, which is perhaps the point that people often lose, because they are thinking about how to change and transform design. We should always ask ourselves the question: what are the social and political conditions under which the need for an algorithm arises? I was thinking, and perhaps it was not clear, of the book by Virginia Eubanks on algorithms used for the management of poverty. In a universal healthcare system, you don't need an algorithm to decide who is eligible, right? In a system where you have a lot of public housing, you don't need an algorithm to decide who's going to get that apartment, right? So you have to have that broader conversation about what kind of society you want to live in, and what kinds of solidarities you want to build, so that the need to differentiate between the deserving and the undeserving doesn't arise. It doesn't solve everything; there are plenty of other algorithms that will be necessary. But I think that is a very important conversation to have, and not only in places like engineering, right?
Because the algorithm is just coming on top of a system that is fundamentally problematic. That's the point I think we can make.

Marion, when you say that, the technical part of me immediately goes to trying to put something in, I'm sorry, because that's what I try to do; somebody else might try to capture it visually. And it goes to: how would I put in the impact of this decision? We do not, I think, take into account in our algorithms the impact of the decision, but we could, and it's conversations like this that would enable it. You and I having a discussion are not going to fundamentally change the way this society allocates or doesn't allocate resources, but we can try to quantify in the algorithm what the impact of these decisions is. There are decisions that are huge, bail decisions, housing decisions, loan decisions, medical decisions, and there are decisions of lesser import, and somehow we use much the same set of criteria on all of them. This should be one of the first things we teach students: have a way of looking at a system and asking, what are the impacts of these decisions? These are the kinds of questions we need to be posing. And by the way, a few of you talked about history, and a piece of our data science major is historical context and ethics. The historical context is incredibly important here: what is the impact, and how have those kinds of decisions played out in the past? So I think it is essential that we be having these conversations. I know I keep coming back to this, but we are building this college precisely to enable these conversations, to bring the relevant researchers together, and also for the students to feel it.

Thank you, Jennifer. So let me come back to Kate. Kate, can you comment on this point? I love Marion's phrase, algorithms allocate, and algorithms are almost by definition about making difference. Marion, to my ear, is making a call toward unification, toward smoothing differences. And yet differences are important: we want individualism and the freedom to differentiate ourselves from the norm, from the common. Even in this conversation we're drawing differences, between scientists and artists, between disciplines. So are we guilty of the same things? How do you feel about this?

I think you've said something incredibly important that sometimes gets lost in this debate, specifically around AI's wider impacts, which is: how are we going to build coalitions to change the way things are done? You're right that there is a tendency to fall into a kind of engineers-versus-everybody-else framing, which is incredibly dangerous, and I think unnecessary, because so many of the points about social scoring and allocation have also been made by engineers. I'm thinking here of some of the founding figures in AI, people like Joseph Weizenbaum, who were engineers and political thinkers.
Or Ursula Franklin, who was a physicist and wrote some of the most extraordinary work on the wider implications of what we build. And what I'm also seeing in the new generation of students, the current undergraduates and early-stage graduate students in engineering and computer science, is that they are asking these questions. They do not accept that they are the ones who just create systems while everybody else does the critical political work. There is a sea change coming, and it is very much built into what is already happening with the students joining us on campuses. That, to me, is incredibly exciting.

But I think Marion is pointing to this other question: why are algorithms brought in to address fundamentally broken social systems? And Marion, you're speaking to my heart here. As somebody who is, again, an immigrant to the United States but was raised in a universal healthcare system, I see the difference, what happens without that, how it has so profoundly skewed and broken the American healthcare system, and how algorithms are being brought in as though they were a way to reallocate. It's horrifying, because again, we are building on the shifting sands of data that tell us these stories of inequality. So you've pointed to something that is possibly the most important part of the project as I see it going forward: we can no longer frame these questions as, how do we tinker with an algorithm to make it more fair? How do we carve out some minor privacy protection from an inevitable expansion of everything into machine learning terrains? The much deeper question is this one: how do we want to live, and where are our existing social systems falling apart? These are the infrastructural questions you've pointed to. And for me, part of that is actually de-centering technology altogether. So many of these questions are not technological questions at all and should not even be put in that domain; they should be social questions first, and then, and only then, should we ask whether there is a technical application.

That is a wonderful note to conclude on, Kate. I think it perfectly sums up this discussion and puts it into the context of a bigger conundrum we struggle with as a culture, with the state of the world today. We want to draw differences; it's instinctive; that quest for knowledge is built into us, and so we are going to want to understand, to tease things apart. And at the same time, that can be incredibly dangerous and carry incredible consequences. So, as a final statement here: thank you all for a wonderful panel.

A perfect finish again. Yes, absolutely.