Ladies and gentlemen, please put your hands together for our moderator for today's future conversation, news anchor and host of KQED's Morning Edition, Brian Watt.

Hello. I just want to say that video gave me hope. In my career, I have gone for the work that draws on my human tools. Before getting into public radio, I was an actor, a woefully underemployed actor in Hollywood. And in acting, it was my whole body, my voice, my ability to listen, to develop relationships on the spot, and to find the right tone in the moment. You take out my body, which I'm glad to do at this age, and you sub in an ability to write with all those other human tools, and you've got what I draw on in my work in public radio for KQED every morning. I consider this human intelligence, and I don't want to believe that something artificial, algorithm-generated, can do better than me or take advantage of me. But I am usually wrong. In fact, I know I'm wrong this time. So the chance to have this discussion is good for me and anybody else who thinks that artificial intelligence is not coming for them. So three women are here to set us straight, and we're going to welcome them right now. They're not just going to set us straight. They're going to give us some confidence. They're going to tell us what we need to know. I'm in this with you; they're the experts. But I want to welcome our panelists to talk about AI: Dr. Safiya Noble, an internet studies scholar. She is a professor at UCLA and the faculty director of the Center on Race and Digital Justice. Dr. Latanya Sweeney, a professor at the Harvard Kennedy School. She's the founder and director of Harvard's Public Interest Tech Lab. Tech Lab. It's late in my day. I'm a morning anchor. Katy Knight, president and executive director of the Siegel Family Endowment. And I'm going to join them. Wonderful. So I think we should start with the very basics for people like me. We hear conversations.
We hear reports in public media about the coming of AI. But really, what is AI? That's the first question. What is AI, and what is it not? Let's get into that quickly so that we're all on the same page when we start.

I'll jump in, and you guys can feel free to hop in whenever you want. I think a good way to think about what AI is today is that we all experience a sort of baby version of it: if you have one of these smartphones and you're sending a text message, it tries to predict the next word you're going to write. It's literally taking a statistical model of all the text messages of all the people and saying what's the likely next word, and then it homes in on the statistics of your own text messages to predict what's likely to be the next word you want. So that's kind of the basics of it, except now, instead of a text message, imagine it's a paper you want to write, or an essay, or you just give it a simple description and it writes the rest. One of my students last spring asked ChatGPT, as an example of one of these language models, to write a paper by Latanya Sweeney. That's all she wrote in the prompt. It wrote a full research paper, complete with citations and references, and the numerical analysis was statistically significant. The only problem is it's a study I never did, and none of the references are real. But it was pretty amazing. It read like a paper; it was coherent. So in that one example, I think we start to get a good glimpse of what's good and what's bad. Here, instead of learning on all the text messages or homing in on my text messages, when we think about large language models today, they learned on the internet. That means it's one of the most racist, sexist, vile things you could imagine. But on the other hand, it's tons of information, so it's really good at generating language.
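The next-word prediction Dr. Sweeney describes can be sketched as a toy bigram counter. This is a minimal illustration only; the corpus, variable names, and function name below are invented for the example, and a real language model is a neural network trained on internet-scale text, not a word-pair table.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "all the text messages of all the people".
corpus = "see you soon . see you later . on my way . see you on friday".split()

# Count, for each word, which words tend to follow it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, like a phone keyboard."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("see"))  # -> "you", the most frequent follower in the toy corpus
```

A phone keyboard does essentially this at much larger scale, then blends in counts from your own messages; the "homing in" Sweeney mentions is just weighting your personal statistics more heavily.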
Like, you can ask it to write a rap song, or you can ask it to write a paper and give it an author's name, and it'll use their style on any topic you describe. That's pretty amazing. On the other hand, it's vile, so they have a program that tries to keep that from coming out. So sometimes it'll say, I'm sorry, I can't answer that. But a lot of times we still find many ways where it still comes through. So my students had a lot of fun last year getting it to show its colors. And it would say things like, oh, she did pretty well for a girl. Things like that. And so you'd see these sort of edges on it. So I want to make sure these guys jump in wherever they want, so I'm trying to lay the groundwork really wide. Let me just give a, you know... I'm old enough that I've seen word processing come and spell checkers come. And when spell checkers came, people were like, oh my God, our children will never know how to spell. My 15-year-old does not know how to spell, but you know what? It's gonna be okay. Well, he knows sort of how to spell. At the same time, word processing changed everything. It made it easier for people to write polished papers. It was a heck of a lot better than typing. And when I found I needed to change something, I didn't have to retype the whole page. I could just make a simple edit. There's an argument that says that in some ways AI is this new technology that allows me to create things with very simple prompts, whether that's in writing, whether it's an image, whether it's a song, whether it's music or video. AI just allows me to do that. In this way, it's simply part of an arc. But all of these technologies have an underside, too. The question is how do we get the benefits without the harms? If you haven't tried generative AI, you should try it. Because it's here to stay. It's changing the world immediately. How it's changing the world, and the consequences of that change, have a lot of adverse consequences that we should talk about.
I have spoken a long time. That's all right. It's okay. You want to jump in, Katy? I am the least knowledgeable. I'm just like, I can't believe that they've allowed me to be on stage with Safiya and Latanya. I will just... I want to call attention to something that is really important, I think: the fact that it could write a paper that sounded like you, right? Because it's predicting your language, but it could not produce a study. And so the limitation in what AI is being trained to do right now is that it's trained to sound like human beings and to reflect some knowledge that makes it seem authoritative, but it is not, in fact, authoritative. It's just guessing the next word, in the simplest terms.

Yeah, I think I will just underscore that in artificial intelligence, the most important word of those two is artificial. And the thing that we should double-click on is intelligence. AI, like its antecedents, or, let's say, the engine that powers them, algorithms, is really thought of within academia or computer science and engineering as mathematical formulations that do all of the kinds of things that Professor Sweeney here talked about. But we want to remember, if we pull back, that artificial intelligence is kind of a marketing term that obfuscates a variety of different kinds of social and political and economic practices, too. So when you hear that AI is somehow superintelligent or potentially sentient, those things are just not true. And I think that's very important, especially here we are in the Bay Area. We could, you know... listen, I came of age in Oakland, and, you know, Oaktown is my town. And I will just say that many of us who are from the Bay Area, who have lived in the Bay Area, we also know that the tech sector more broadly, I'll say it, has ruined the Bay Area, too, right? In many ways: the gentrification, the unequal wages, you know, the divestment from public education and public goods and public resources.
So when we hear about AI, let's say at a meta level, around what the tech sector is doing or what all these AI companies are about, they are doing this kind of statistical modeling. And I think, in the weeds, that's one part of AI. But on the other hand, AI companies and tech companies are also involved very much in a process of total extraction: capturing data about us, from when we have our period to when we're asleep, how we're moving about through the city, our social media engagements, photos of our children. Everywhere that we're touching a network, it's extracting that from us and turning that into a profit and leaving us holding very little in exchange. And so I think there's a lot to talk about today when we talk about AI and we talk about technology, especially in a beautiful city like San Francisco and here in the Bay Area, where we see the uneven applications of AI or technology, which, again, leaves some people holding the bag while others accumulate incredible riches at our expense. So I think that's another part of the AI story.

Well, the question I was going to ask next was what are common misconceptions about the technology that we need to debunk? I feel like you've given us one we can really feel here in the Bay Area. Would either of you two want to make sure that there's something we're not misunderstanding?

Yeah, I mean, plus one to everything she said. But I would also add other harms to our society. In many ways, the current technology, especially in an election year, is really sort of an existential threat to our democracy. You know, right now on the internet, most of the content I search for is human-generated; someone typed it. Within the year, most of the content on the internet will be an echo chamber of whatever these large language models have produced. And all of a sudden, we as humans become the consumers of that echo chamber.
And the originality, our generative voice, gets lost. The ability in elections to use generative AI to manipulate small groups, because you know so much about me in this technological data infrastructure, being able to home in and give me a political message which is different than the one you deliver to someone else, influences my opinions and my thoughts about what I think is reality. But it's not necessarily reality. I was reflecting on it earlier today, listening to the speeches outside. And one of the things, you know... what was it? How did we maintain a democracy? How did blacks know the truth? The idea of Juneteenth came to my mind. For those of you who don't know, this was about two and a half years after slavery was determined to be illegal. Enslaved people in Galveston, Texas were not allowed to know that. And so for two and a half years, they continued to be enslaved, until the Union Army came in and told them otherwise. In many ways, that's kind of where we are. You are who the technology says you are. You are what social media says you are; even the ads you see are customized and generated for you and not necessarily shared with someone else. How do you know truth? So I do think this is a huge challenge. I do think generative AI is a piece of it, but it really exacerbates the problem.

I want to ask Dr. Noble: you authored a bestselling book on how search engines reinforce racism. Give us an example of that. I mean, how many Google searches a day do I do? So give us something we can understand.

Sure. So when I first started the research that became the book Algorithms of Oppression, I was thinking about the future of knowledge for black people in particular, because we know how much of our knowledge gets co-opted, appropriated, taken from us, repurposed and repackaged under some other community's values.
And I was thinking about what it means that librarians in particular were digitizing so many collections. And I come out of this training with librarians, and so I was always fascinated at how librarians think of themselves as people who are responsible for what people will know a thousand years from now, 500 years from now, 100 years from now. It's a very noble profession in that way. And here we had this company that I'm sure a couple of you have heard of, Google, you mentioned it, that was kind of self-proclaiming that it was organizing all the world's knowledge. And of course, we know that there are limitations to what that meant. They really mostly meant all the world's knowledge in English, and that already is a limitation and doesn't reflect the world. But I was interested in this tension between people who have a professional and kind of community commitment to knowledge in the future, the very kinds of things you were just talking about, and large-scale advertising companies who kind of purport that they are knowledge keepers and organizers. And so I was trying to figure out how I would talk about this tension and this relationship. And the first study I did, I mean, it's been almost 15 years now, the first study I did is I took the US Census and I took all of the kind of racial and ethnic categories and the gender categories and I kind of cross-searched them. And you can imagine what I found. Now, when I did searches on Black girls, Latina girls, Asian girls, 80% or more of the first page of search results in a Google search were pornography. So Black girls were not just represented with pornography; we were synonymous with porn. You didn't have to add the word sex, you didn't have to add the word porn. Black girls meant porn sites.
And that was really the first place where I was like, well, this is interesting, because Black girls and women are not going to be able to control the way that we are misrepresented in these kinds of projects like a search engine. And you said it yourself: how many of us use a search engine every day to find something? And you know, it's not just that we're looking for it, it's that we believe, when we use a search engine, that it's credible, it's reliable. We assume that it's giving us the first 20 hits out of 2 million possibilities and that someone somewhere has done some thinking that's better than our thinking to help us find those answers. So we have this tension that is happening in society between knowledge organizations, teachers, professors, parents, subject matter experts, journalists in particular, quite frankly, and you know this, up against something like a search engine, and who's narrating and guiding us to a deeper exploration or investigation. And that study is really what launched the book Algorithms of Oppression. And there are many, many dozens of examples in this book of the stakes of what happens when we turn over our knowledge-seeking, our understandings about people and ourselves, to these kinds of technologies. But of course, now we're living in a time where people, let's say, are more politicized about things like social media, Facebook or Twitter, I refuse to call it X, because X is for Malcolm. And this is my position. So you have social media companies where we do understand a little bit more that we might be manipulated, like, we know we're in an echo chamber, so to speak, of the friends that we've made or that we follow. But so many people still, as they come up against disinformation or propaganda, maybe in social media, they're like, I'm not sure about this. Let me double-check. What do they do? They go search for it in a search engine.
So I'll never kind of stop looking at these banal kinds of technologies that we don't really think about. Now, there's a person on this stage right next to me who did a very, very important study on Google. And I really hope that you'll talk about that, your experience with searching for your name, because it's so important, and I cited that study in my book. When we concede, or, like, cede so much space to tech companies to tell us about our world, we find that those who are most marginal, the people who have the least amount of resources, lose the most. Which is really the thing that I try to do: demystify what a search engine is, which is to say it's not democratic. It isn't just that all the people looking for black girls are people looking for porn. It's that the porn industry and other industries have more money than the black girls and the black women, and so they're able to optimize content and make it appear natural and neutral and reliable. And so it's just an opening. I mean, that book is six years old next month, but it still is kind of so prescient in the moment, because there are so many dimensions of our lives that we're turning over to tech, and we're conditioned, in fact. I ask my students, I say, look, if you could just Google your whole life away, why are you at UCLA getting a degree? And they're like, Dr. Noble... and I'm like, I'm just saying. And it's interesting, because you can hold up a cup like this when you're here where we're learning together, and you say, if you look at this from the perspective of a search engine, it might just give you the price on white mugs. Okay, it's going to sell you something, you'd better believe it. That's the point. It's an advertising engine. But if you asked about this cup in an art class, we might talk about its ergonomic design. If we looked at this in a chemistry class, we might say, well, the compounds that make it need to hold together. It needs to not liquefy, right? It needs to be solid matter.
If we talked about this in, you know, any number of courses, we're going to have a lot of different points of view and vantage points on this. So these deeper investigations are so important. And it kind of goes back to what Aaron was talking about as he opened this session. Knowing and connecting and having many vantage points is so important. And what search engines in particular do is really flatten our ability to think critically from many vantage points.

I'm going to ask Ms. Knight. I think I read in your bio that at one point you worked for Google. I did. I'm not trying to call you out on the spot after what Dr. Noble just said. But I did feel like asking if you feel like this issue was centered as it should be in the building of the infrastructure of search.

Yeah, well, I'll answer that in part by saying that I often reflect on my time at Google in relation to what I do now and say that now I'm doing penance. So no. And that's the next question, about what you're doing now. I want to get to this. But so, at Google, I worked on public policy, actually, in government relations and community engagement, which is an interesting vantage point at Google, because in those days, in 2012, 2013, don't be evil was still the motto. Allegedly. And this notion that we were doing the best right thing for the future was very, very prevalent at the company, and something that I think was driving people to come to work, as were the free meals and the big paychecks. But it was driving people to come work on these products and really believe: this is life-changing stuff. This is stuff that people use every day. We had this notion of the toothbrush test. Google wanted to produce products and services that were as important, as relevant, as your toothbrush. You're using it every single day, twice a day, if not more. And that was never actually operationalized in a meaningful way; it never really meant anything.
It was something that we could reflect on and pull into our own sense of being and make ourselves feel good about what we were doing. But the actual issues of bias, of the unequal way that our products were being adopted, the ways in which some people are able to navigate privacy rules and regulations while others have no idea what they're signing up for: none of that was centered. It was all about the shiny object, the thing itself, the technology for technology's sake. And I think so much of big tech, the tech industry, and even just what's popular and prevalent in technology today, is about I'm just building this because it's the best thing that can be built. We've focused a lot on what AI can do. I don't see any, or enough, developers asking what AI should be doing. What do we need it to be doing? We have incredibly powerful tools, but we don't actually have much say in how they're being used and what they're being developed for. And I think as a society we have needs. Everything's not all gravy. We definitely have problems that need addressing, but the tech industry is building a lot of solutions in search of new problems, which actually are not the ones that need addressing. So no, I don't think Google was centering this. I don't think they are centering it. They got rid of Don't Be Evil as their motto a long time ago. And I don't know that there's space in the majority of big tech to really center these issues unless society pushes back in a meaningful way.

And let me ask you, in your current work at the Siegel Family Endowment, what kinds of things are you all excited about in AI? How are you taking on AI?

Yeah. So, you know, I run a foundation that works on tech and society issues. And so when I tell people I work for a technology-oriented foundation, they automatically assume, oh, so you give computers to classrooms, or you want to give every kid an iPad. Apple wants to give every kid an iPad.
We're looking not only at the opportunity to leverage tech for good, but also at the risk, the harm. And we're trying to find ways to mitigate the harm of pervasive technology, of technology that we don't have control over and the way it's infiltrated our lives. So we're excited about research that is telling us the truth about what the technologies being implemented and rolled out into our world are actually doing. We're excited about the practical ways in which coalitions are forming to push back against big tech, to call out the need for regulation, to call out the need for consumer education, the ways that we can be better consumers of these products. We're also funding some cool work on how we do this better. There actually are some people that want to build companies that are not just about sucking up as much data as possible to feed you back personalized ads. So we've funded some work on different models for what we call data stewardship and the way that data privacy is handled by companies. And I am excited, on the other hand, about some of the ways that you can use AI for good. We're funding an experiment right now, and I talk about all our work as being experimental. We don't know the answers. So the point of this philanthropy is to try to find them. But we're looking at whether we can use some new AI tools to actually help flag folks who would otherwise be screened out by resume tools that use AI. Trying to use it for the opposite effect: to raise those profiles up to the top of the hiring pile. So there are some good opportunities to do cool work too. It's not all gloom and doom, but a lot of it is.

Well, listen, I want to just say, in terms of gloom and doom, because I do think, well, it's a fact, you heard it here, that black women and women of color have been at the forefront of completely shifting the paradigm about how we talk about tech as a problematic set of practices in our society. So there's no denying that.
And the tech sector itself has had to respond to our criticisms. And our criticisms are now talked about; the harms to society and communities, our communities, people who look like us, are in the papers every week, every month. So while I know that the tech sector likes to think of that as gloom and doom, I think of it as black women in particular, and women of color, people of color, LGBTQ people of color, quite frankly, being at the forefront of the conversation about civil and human rights and technology. So in that way, it's important that we're having this conversation here today, and it is, in fact, in our tradition, in our community, to see things that other people do not see, because they are not thinking about us. But people like us, we are thinking about us. Yeah. And we're thinking about our kids and our generations. Therefore, we ask different questions of the industry and of these projects and of these technologies. And it forces a reckoning. And I think we are living in a moment where, you know, I say in my work that AI will be the most important human rights issue of the 21st century. And I think you can look, for example, at what's happening in Palestine right now, or the technologies of war; that's the blueprint for the kinds of things that we will see in the United States, and we're seeing the rise of authoritarian regimes all around the world, repression, economic despair. And that isn't really doom and gloom. That's like, hey, you're violating our civil rights and we're not having it. And so I think that is very important, and we need to kind of reframe the kinds of things that we're doing to say, you know, black women are often the canaries in the coal mine. I think of it in that way: under-resourced to do this work, but truth tellers. And so all of you, you have your own experiences with these things, where you're like, something is amiss. Something is not right with these things that I'm doing.
Why did I just talk about X, Y or Z? And now I'm getting an ad on Instagram for that. You know, something's wrong. And trust that we're here to tell you, trust that we know things and we feel things in a lot of different ways. We have lots of different ways of knowing, and I think that's our superpower.

I do think that the observation that you're bringing to light, why is it that so many black women are the ones doing the research that shows the issues, I think your point is spot on. It's because we're not in the room that's doing the design, and the design isn't for us. Yes. And because of the intersectionality that we represent and our role, we see things that others don't see. Absolutely. We see the experimentation on our communities. I mean, I live in Los Angeles, and this is a place that was like one of the first epicenters of predictive policing. We saw predictive policing deployed against me and the neighborhoods that I live in, in black neighborhoods. We saw those things. You know, I know this, I'm not trying to sound flip, but I will just say, you know, we didn't need AI to tell us when the police were coming. Do you know what I'm saying? Like, you could just literally know. So these kinds of technologies that are about surveilling us, capturing us, ensnaring us, either as consumers or as people who will be fodder for a lot of different systems, we see those things. And we're on the worst receiving end of some of these kinds of technologies, as are other communities of color around the world. And I think that's one of the reasons why we talk about and we see these things: because there are children, there are family members, there are friends, there are neighbors that we care about.

So now I'll tell that story. Okay, tell the story. The story needs to be told. Well, I think it really makes your point, right? So I'm talking to a reporter, and I wanted to show him a paper.
He types my name into the Google search bar, Latanya Sweeney, and up pop these ads implying I have an arrest record, even though I don't. The reporter says, forget that story, tell me about the time you were arrested. Does Harvard know you were arrested? Did you plead guilty or innocent? He was very aggressive. And I'm like, I've never been arrested. He says, then why does Google say you've been arrested? Pointing to the ad. Eventually I click the link, I pay the $89 to the company, and I show him that they do have a record on Latanya Sweeney, because it's a data broker, but there's no arrest record for anyone named Latanya Sweeney. But this became a real question: why did that ad happen? So I spent three months doing a study and showed that if your first name was given more often to black babies than white babies, an ad would show up implying you had an arrest record, even if you didn't. And the ad isn't placed on your first name. It's not some kind of statistical association. The ad is placed on your full first and last name. So why was this happening? And so eventually, what the study showed was not only that this was happening 80% of the time, but it was also the first time an algorithm, in this case Google AdSense, was shown to violate the Civil Rights Act. Because if you have two people and they're both going for a job, and one of them types in their name and an ad implies they have an arrest record when they don't, and for the other one it just says, oh, looking for more information about Julie, then that's a disparate impact. And the Civil Rights Act, something that we're sort of partially celebrating by celebrating Martin Luther King Day, says that that's not okay in employment. Discrimination itself is okay in certain situations, but not in all situations. One of those is employment, and one of the protected groups is blacks, and here this software was making this impact. Now, we don't know how long that had been going on, right? Right.
And we don't know how many other ways, like credit ads. I took a year and served as the Chief Technology Officer at the Federal Trade Commission, which is kind of the de facto police of the internet. And one of the things I was able to show is how the advertisements for credit cards were different if you were in a group that the ad network considered likely to be a person of color, right? You got the worst credit card ads despite your financial standing, versus getting the best credit card ads. That's right. And here's what bothers me, and we have a law against that too, the Fair Credit Reporting Act. What bothers me is those are two studies I did, both of which show algorithms breaking the law. But other than those two studies, is anybody in regulation enforcing those laws? Right. And if those laws aren't enforced online, what good are those laws? Right.

And what's so important about those studies that you did is that they opened up this field that many of us are in, where we're looking at and understanding things like what it all means. And Tressie McMillan Cottom and Constance Iloh both did studies showing, for example, that when black women are searching for educational opportunities in a search engine, they're more likely to be served up predatory educational opportunities, you know, your Trump University fake, fraudulent, predatory kinds of educational opportunities. They're more likely to be scammed into going into these kinds of environments and taking Pell Grants and other kinds of loans. And, you know, so it's like: the ecosystem that we are ensnared in, that is building digital profiles about us constantly, that we do not understand, that we cannot affect, we cannot change, the selling and brokering of data about all of us in this room and beyond. What do we do about that?
And I think that is really the next frontier of what we think of in our center as digital civil rights, which is taking this incredible work and saying, what kind of civil rights laws do we need to have on the books to protect us from predatory technologies like this? And how do we build narrative and art and poetry and everything you can think of to help educate the public about this new realm that we're living in?

One of the questions I wanted to ask was what each of you wants from government in this space. Ms. Knight, let's start with you. If the government had a couple of things that it could do, local, state, federal, what would be the best place to start? We need another hour, but no pressure. No pressure indeed. I'll start with what I think are relatively simple things, because I believe deeply, deeply, deeply in the power of local politics. I think one of the greatest issues that I have with our current political system is not just polarization and Trump and stupidity, but that so many people have lost a sense of local politics, of how deeply it impacts you as an individual and how deeply you can impact those leaders. And so for me, I think it's demanding accountability in local politics: thinking not necessarily about large-scale regulation of tech companies, which needs to happen at the federal level, but about what your elected officials are doing about the ways in which tech companies are destroying your neighborhoods. Or what are they doing about the tax incentives being provided to these companies to come, you know, provide jobs to people that they actually don't provide? How are your local police forces using these algorithms and facial recognition technologies?
How are all the things going on in your day-to-day life being influenced or impacted by tech and these profiles and algorithms? They just keep rolling along, because at every level of government we do not have enough know-how to combat the lobbying and information campaigns that tech companies are running to maintain the control they have. But at the local level, I have found a much more open ear and a much greater sense of accountability, because it takes a lot fewer votes to be in and out of office as a local elected official than it does to be Donald Trump. Well, I guess if you're Donald Trump, you can have any number of votes and still... Never mind. But where we can impact those problems is something I've been thinking a lot about for the near term, because it is deeply important and I think there's traction we can gain. I love it. I'll go second; you do the sweep. We are here today on the eve of Martin Luther King Day, and in many ways I'm very much a product of the civil rights movement. I grew up in the South. My uncles sat at the counter at Walgreens in Nashville to say that Black people could sit there, and of course they were arrested and so forth. And when I reflect on it, I think of all the lives lost and the battles won. And those battles that got won were a whole suite of laws. In other words, if they had been asked to sit in this chair and answer that very question, what can government do to help, their answer was a series of laws that got passed. And those laws don't work online. That's not OK. So much of our lives is controlled and dictated while we're online. I want to know that the things they fought for are being protected, that those rights are being protected online through all the ways technology interacts with us. I can't top that. It was perfect. It's true. Employment: what can workers do?
What should workers and employees know about AI, and how can they advocate for themselves vis-a-vis their employers? Well, I'll jump in on this, because we just had a major battle in Los Angeles around the use of AI, involving the Writers Guild of America and the actors' union. And they have been very successful so far in raising the visibility of what AI can do to decimate their workforces. So I think we should be incredibly mindful, again, of the overpromises of what AI can do, and of where it is constantly introduced in ways that depress worker wages, displace and replace workers, and obscure the incredible drain of resources out of the public coffers that also affects us. So of course, as workers, we need to be organized and/or unionized. We need to be working together. The best knowledge about AI, where the rubber meets the road, is with workers. Whether it's domestic workers, or hotel workers in Las Vegas who are saying, wow, now that you check in and check out on your phone instead of having a worker in a hotel help you, that's completely decimating that workforce. Whether it's city and state and federal workers, understanding all the ways these kinds of technologies work. I mean, the raid on the public coffers is selling software and hardware into companies, into industries, into governments, to pull and extract that money out. It is truly horrific when you think about what has happened and is happening to workers and to the public good more broadly. So if there were ever a time when we could be talking with our colleagues and coworkers and neighbors and friends, and in our cities, about what we can do differently and where we can have moments of refusal, this is it.
And I would say, especially as we come up to the next election, when I think we are going to be flooded with a lot of political propaganda and attempts to fragment us and keep us from talking to each other, it's really important that we do that. And workers are organizing, in fact. There are lots of people meeting and talking about how to thwart these kinds of intrusions. We need to keep that up. Did you want to add anything? I'm going to plus-one that. OK. You're just going to plus-one, just high-fiving each other. So we have basically a minute left, but I feel like young people should know what role they have to play in the future. Absolutely. How do each of you see them engaging around how AI is deployed? I'll jump in. One of the best things about being a professor is that you're always engaged with perpetual 20-year-olds. They're young enough, naive enough not to be jaded by the world, and at the same time they believe they can change the world. The students call my classes the Save the World classes. But really, I'm the one who gets inspired by them. They give me tremendous hope for the future. They see a world, honestly, as messed up as it is. We are not passing off to them the world the way we received it, and they know that they are left with that. And the fact that they know that and are leaning into it is really amazing. When I started my career, and I'm a computer scientist by training, most of my students made a beeline to Silicon Valley. Now, most of my students still want to do that kind of work in technology, but they question whether that's the way to do it. And they're looking for other options and fellowships elsewhere. And so that gives me great inspiration. I'll pass it on. I mean all of that. And for young people in particular, I always say: don't let the world tell you that you're not allowed to ask questions.
"Why?" is a really important question. Why do I need to sign away my rights to use this product? Why are we using this in our classroom? Why do we have access to this and someone else doesn't? Those questions, that curiosity and that sense of agency, are really important to hold on to. And I think tech companies are trying to strip away our agency as much as possible, to get us caught up in their loop and cycle of more and more products and services they can offer us. So hold on to that agency and that questioning, and build the future. You don't have to be a technologist to be involved with technology in a meaningful way and to change what it looks like. Yeah, let me just say, I think young people and old, we need to have a power analysis. We need to question systems of power and whether they're working for us, and of course we know how much they're working against us. And the best thing about being a young person is that everything is fair game when it comes to understanding how power works against us. I mean, AI and tech is a climate justice issue. It's a human rights issue. It's a civil rights issue. It forces us to question whether capitalism is the right system for us to live under. It forces us to question whether we want to live in a world where we become indifferent to the pain and suffering of others, whether they're living on the streets and we're stepping past them, which is so incredibly painful. We cannot become desensitized, and young people keep us honest and keep us sensitive. And I'm so grateful for them. I'm grateful for my own kids. I know we're all grateful for what kids do for us, because they ask really hard questions, like why. And that's the gift they are to the world, for sure. Thank you, all three of you. Thank you. This has been Dr. Safia Noble, an internet studies scholar, professor at UCLA, and faculty director of the Center on Race and Digital Justice; Dr. Latanya Sweeney, professor at the Harvard Kennedy School, founder and director of Harvard's Public Interest Tech Lab; and Katie Knight, president and executive director of the Segal Family Endowment. Thank you very much. Thank you. Thank you.