Good morning or good afternoon, everyone, depending on where you are. My name is Cecilia Muñoz, I'm a senior advisor at New America, and it is my job to welcome you all to this conversation about System Error: Where Big Tech Went Wrong and How We Can Reboot. We are very excited about having this conversation today. We have all three authors with us. I'm going to introduce them very briefly, introduce my co-moderators, and then we're going to get this conversation started. So let me start with Rob Reich, who is director of Stanford University's Center for Ethics in Society, co-director of the Center on Philanthropy and Civil Society, and associate director of its Institute for Human-Centered Artificial Intelligence: a busy guy. Mehran Sahami, an early Google employee who helped invent email spam filtering technology (thank you so much), is currently the associate chair for education in the Stanford University computer science department. And Jeremy Weinstein, my friend and former colleague, launched the Obama administration's Open Government Partnership and currently is both a professor of political science at Stanford and leads Stanford Impact Labs, which partners research teams with leaders in the public, private, and social sectors to try to solve public problems. Welcome to the three of you, and thank you for writing this wonderful book. I also have a couple of amazing co-moderators for this session; there were so many of us who were excited about this book that we are all in on the conversation. Starting with Anne-Marie Slaughter, who is the CEO at New America, and Tara McGuinness, who leads New America's New Practice Lab, which is, in essence, putting public interest technology into practice in the service of family economic security. As we start the conversation, Anne-Marie, I'm going to turn to you first to ask the first question. I'll just let everybody know that we're using something called Slido to submit questions. If you have questions, Slido is the box located to the right of the video, and if you have any issues using it you can contact events@newamerica.org. Please look for the Slido box next to the video to start asking your questions. And with that, let's get this conversation started. Anne-Marie, over to you.

Terrific. I have to start by saying this is a terrific book. And it's a tested book, because I happen to know that the three of you taught a course in this general area for quite a number of years, and I have a very good friend and colleague who sat in and said she found it revelatory. I want to start by saying how terrific it is that we have a computer scientist, an ethicist (an ethical philosopher), and a political scientist collaborating on such a complex subject. I really do think we need more of this, particularly when we're talking tech, because we're not going to suddenly understand the computer science, nor the political science, nor the ethical considerations, all on our own. So that's my plug for the book. Jeremy, I'm a policy person, you're a policy person, so I'm going to start with a more policy question, given that the subtitle is "where big tech went wrong and how we can reboot." In Washington, that conversation often defaults to "break it up." Framed that way, it's an antitrust issue: the problem is they're too big, they're monopolies, we should break them up. So I want to start by asking: what's your view on that question, or how would you frame that question and answer it?
So thanks so much, Anne-Marie, Cecilia, and Tara, for having us today. It's such a great pleasure to be with this group, and in many ways the three of us have been part of a burgeoning movement around public interest technology that New America, with Ford and others, has really helped to create. We see that as an extraordinarily valuable community to be a part of as we're having these really hard conversations, so thank you for your leadership on that effort. I wouldn't let the issue be framed as antitrust on its own. In fact, one of the central arguments of the book is that, as we confront a set of successive issues where new technologies have not only extraordinary benefits but also evident societal harms, we do ourselves a disservice if we don't try to understand the roots of those problems: the systemic features of the repeated cycle we see of new technologies having societal effects. And for us it's not only about policy or the failure of policy; it's really three ingredients. Number one, an approach in engineering that we call the optimization mindset, which is at the core of why, when new technologies are developed, we can't think of them as value-neutral. They basically surface and prioritize the achievement of some ends at the expense of other ends, and right now those decisions are being made inside companies. Another element is the structure of the venture capital industry, which basically scales new technologies before we understand their potential harms; the focus on unicorns and the desire to dominate markets quickly produces these kinds of social effects at a very large scale. And the third piece is the role of our politics. The role of our politics isn't just the present-day question we face about antitrust, but in fact a successive cycle, a race between democracy and disruption, that unfolds with every new technology. What we have in this context is a sort of avowed commitment to creating a regulatory oasis around big tech, from the 1990s into the present. Where that brings me is just to say that our answer is not that we're going to solve this through one regulatory silver bullet, whether it's antitrust, or data privacy law, or audits of algorithmic decision making, but that we're actually going to have to tackle these issues by looking at the ethic of responsibility in technology companies, thinking about the corporate power that has been concentrated in a small number of firms, and also thinking about how we set in place a set of democratic institutions that are capable of governing tech. In the book, we look at all three of those arenas as places for action. The good news is that we're starting to see a conversation in Washington where people are waking up to the problems of the present, waking up to the set of regulatory issues on the table, antitrust among them. But we also need to keep our eye on the problems that we can't yet see, and recognize that if we don't address these underlying drivers of the problem, it's just going to be the next technology that generates harmful social effects. A data privacy law won't have solved that, nor will the breakup of a couple of large companies.

Great. Tara, why don't you take the next question.

Sure. I was really struck, reading the book, that it occupies a different space from other pieces.
And part of that, I do wonder, is whether it comes from your diversity of backgrounds: three Stanford professors, but with really different roots in philosophy, political science, and computer science. So I was wondering if you could talk a little bit about the process of coming together, and how that helped you break out of writing an explicitly technical book or an explicitly political or ethical book. Mehran, can we start with you?

Sure. Well, part of the impetus for the book was seeing that on campus there had really been a shift toward engineering and computer science over the last ten years. This is not true just at Stanford; it's a national phenomenon. At Stanford specifically, enrollments in computer science have grown by about 350% in the last ten years, so it's the largest major by far on campus; about one out of five undergrads majors in computer science. And this shift had taken a lot of students away from other disciplines, so one of the impetuses was to think about how we build a larger foundation, where we get the computer scientists to think about issues from other parts of campus: the social science that might be involved in evaluating technologies, and the philosophical background and value trade-offs involved in building those technologies. So we came together, and for a year we just spent time talking and discussing and putting the class together before we even offered the first iteration of it. We had to arrive at a shared language across the different fields we come from: what are the different aspects of each, how do they fit together, how do the ideas, like puzzle pieces, begin to form a larger picture? That's what we wanted students to take away. And from doing that class, building out materials and having conversations with students and with speakers who would come in, it was clear that we wanted to try to reach a larger audience. That became the seed for the book itself: how do we take some of these ideas and form them into something cohesive and comprehensive that we can provide to a broader public? Part of the reason for doing that is that we really need public engagement, as Jeremy talked about, to deal with these issues in a systematic way. It's not just about having technologists try to affect something inside a company. It's not just about having venture capitalists or executives make different decisions. It's about getting everyone involved to understand the role they play in the larger policy issues in our democracy, to achieve better societal outcomes.

Rob, Jeremy, anything you want to add?

I'll just add an anecdote that crystallizes for me why the course seemed important. The language we used when we were developing the course was that it wasn't just that we thought it would be intellectually interesting to have a philosopher, a public policy expert and social scientist like Jeremy, and a technical person like Mehran come together; it was aimed as a cultural intervention on campus. And why did we want a cultural intervention? Here's the anecdote. Five years ago I was teaching an introductory course in the fall semester for first-year students, a philosophy class of a hundred or so students, and a bunch of them came to my office hours a couple of weeks later. These were students who had only been on campus for four or five weeks.
One student came in, and I, making small talk, asked: what are you thinking of majoring in, what do you want to do while you're at Stanford? And he said, I'm going to be a computer science major, absolutely for sure. I had startup ideas already in high school, and I just can't wait until next semester when I can take the venture capital class and meet the people who are going to fund my next startup idea. And I said, oh, well, what's your big startup idea? And the 18-year-old, totally sincerely and earnestly, looked me in the eyes and said: well, Professor, to tell you my startup idea I'd have to ask you to sign a nondisclosure agreement. I was taken aback at how the socialization of the whole Valley's ethos had already sunk into the first-year students on campus. Nothing wrong, of course, with having a startup idea, but that symbolizes for me the conveyor belt that Stanford has created: you hop on the computer science major early, you do really hard work (it's not an easy major), and then the world comes knocking on your doorstep to induct you into a startup company or a big tech company, and you don't necessarily get much training in thinking about the social science or ethical considerations that go along with the power of big tech. That was an important aspect of the course, and it's what we hope to do in the book as well.

So I'd love to follow up on that, if I may, with a question to you, Rob, in particular. One of the points that you all make in the book is that technologies encode values. I'd love for you to explain a little bit what you mean, and maybe give us some examples.

Absolutely. There's an old thought that what technologists do is somehow more objective, or at least value-neutral, compared to human decision making, which of course we know is biased and flawed and irrational in so many different ways. And what an algorithmic model, or a deep learning, machine learning approach to a problem, can do is provide some sort of scientific or technical objectivity. But what really happens is that these models encode within them a set of choices about values, and at the moment the only people who get a say in how those values are decided or traded off are the technologists themselves. So let me give you a couple of examples. Think about the text messaging service that you use, whether it's Apple's iMessage or WhatsApp, or Signal if you're really concerned about your privacy. There's a technical system, an end-to-end encrypted messaging platform, that attempts to prioritize individual privacy: neither the government nor even the company can inspect the content of your message if you're using an end-to-end encrypted platform. But of course, as we all know, there are other considerations at play when we think about what goes on in text messaging. Apple recently announced that it was going to try to inspect the photos that are uploaded to iCloud in order to try to prevent child pornography and various forms of sex trafficking. Text messaging can also be an arena for terrorist coordination, so there are values of national security as well as personal safety at stake. And when technologists decide, well, it's just really cool to optimize for privacy: privacy is a good thing, but it's not the only good thing.
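To make the privacy guarantee Rob describes concrete, here is a minimal sketch of end-to-end encryption using the PyNaCl library. This is an illustration of the general idea only, not any particular messenger's actual protocol; real systems such as Signal add key agreement, ratcheting, and much more.

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a keypair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's *public* key.
sender_box = Box(alice_key, bob_key.public_key)
ciphertext = sender_box.encrypt(b"meet at noon")

# A relaying server (and the company running it) sees only `ciphertext`
# and holds no key that can open it, which is why neither the platform
# nor a subpoena to the platform can recover the plaintext.

# Only Bob, holding his private key, can decrypt.
receiver_box = Box(bob_key, alice_key.public_key)
assert receiver_box.decrypt(ciphertext) == b"meet at noon"
```

The value trade-off lives exactly in that design choice: with no key escrow, content inspection is cryptographically off the table unless the architecture itself changes, for example via client-side scanning of the kind Apple proposed.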
And you can keep going. We all know the social media platforms, Facebook, Twitter, Instagram, whatever they are. There's a commitment in certain respects to freedom of expression, and that's a good thing. And yet these platforms are a forum for the proliferation and amplification of hate speech, and for the insults to individual dignity that happen on a routine basis on the platforms, as well as, of course, the pollution of an information ecosystem with misinformation and disinformation. So at the moment, one person in particular, Mark Zuckerberg, is the governor of the speech environment of four billion people, choosing literally on his own authority alone how to balance freedom of expression versus dignity versus the informational health of a democracy. That's way too much power for one person to have. And the value trade-offs I'm identifying here are just the beginnings of how any technology encodes within it a set of important choices that we all ought to have a say in refereeing.

I've got many, many questions just jumping off from that, but I guess I'll start with Mehran, particularly on this optimization question. You know, we've written papers on it: it's the business model, this idea of, as you say, efficiency above all, and personalization, to the point that, yes, a Facebook user can see things on their feed that will help them, but so too can an advertiser target ethnic hatred. So Mehran, maybe you could talk to us as a computer scientist: how do you regulate these algorithms? Do you create other companies that have better algorithms and hope that there is then competition? But Facebook's got four billion users; how are you going to compete with that? Is it interoperability? I'd love to hear a more technical explanation in layman's language.

Great question. Part of the issue is that there are technologies that we can use, in some sense, to monitor other technologies. One example is content moderation: Facebook does this, and some of the other platforms do this, to varying degrees of success in different areas, building automated algorithms to do content moderation, to identify things that might be hate speech or might be bullying or whatever the case may be, and then either filter them from the system or flag them. But there are a lot of knobs in there: how strictly you want to do that content moderation, and also how much virality you want to give to a piece of information. As the old saying goes, a lie spreads halfway around the world before the truth gets its pants on. It doesn't have to be that way. We can actually use the algorithms to slow down the lies that spread around the world, by limiting the amount of replication of a piece of information until that information can, for example, be verified. We can do those kinds of things algorithmically. And at that point it becomes a question of the policies of the companies, what they actually want to do.
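As a hedged illustration of the kind of "knob" being described here (a sketch of the general technique, not how any real platform implements it), here is a toy reshare throttle in Python. The cap value and all names are invented for illustration.

```python
from dataclasses import dataclass

UNVERIFIED_SHARE_CAP = 100  # policy knob: max reshares before verification

@dataclass
class Post:
    text: str
    verified: bool = False  # set True once fact-checking clears the post
    share_count: int = 0

def try_reshare(post: Post) -> bool:
    """Allow a reshare unless the post is unverified and has hit the cap."""
    if not post.verified and post.share_count >= UNVERIFIED_SHARE_CAP:
        return False  # throttled: virality paused pending verification
    post.share_count += 1
    return True

rumor = Post("shocking claim!")
for _ in range(150):
    try_reshare(rumor)
print(rumor.share_count)   # 100: the spread stalls at the cap
rumor.verified = True      # verification lifts the throttle
print(try_reshare(rumor))  # True
```

The code is trivial by design; the contested questions (who sets the cap, who counts as a verifier, and whether a company's engagement incentives favor turning the knob at all) are exactly the policy questions Mehran points to.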
And that's the antitrust question from before, and it's an interesting one. Sometimes people wonder: if I took Facebook and broke it up into ten little Facebooks, would that solve the problem? By itself it probably wouldn't, unless we got some really strong interoperability guarantees, as you mentioned, that allowed information to migrate between different platforms. One of the biggest powers these platforms have is precisely the network effect: you want to be on the platform where all your friends are, and that creates a monopolistic effect where everyone gravitates toward the same platform. So even if we broke them up, we'd probably just see a reconsolidation, unless there was really a way to share data between them. And there have been proposals to do that; it's just not in the interest of the companies to adopt them, which is why regulation becomes necessary.

It's like making your cell phone number portable, right? Because none of us would ever get a new cell phone if we had to give up our number every time.

Exactly.

I'm happy to jump in and... sorry, go ahead.

There's a chapter in your book called "Can Democracies Rise to the Challenge?" And I would really love to hear, maybe starting with Jeremy, your respective answers to this question, both about democracies writ large and really about ours.

Thanks, Tara. You know, this is a moment where (and many of us have shared this experience of having had the opportunity to serve the public interest in the federal government or state and local governments) the polarization we see in the country and the paralysis we see in Washington are deeply dispiriting. The central argument of the book is that there is no way out of the toxic mess we're in with big tech without energizing our democracy around this challenge. We basically see the regulation of big tech (and regulation can be a loaded word, so what we really mean by regulation is using our political institutions to help us navigate these difficult values trade-offs, so that these trade-offs aren't simply made by technologists and the companies where technologists work) as one of the central and existential challenges for democracy in the coming decades. People have remarked to us on the surprising optimism that they read in our book about the potential for democracy to rise to the challenge. I'm going to say a bit about why I do have a sense of optimism, and hopefully Rob and Mehran will add as well. The first reason is that, as three faculty at an institution that is one of the major trainers of the next generation of technologists in Silicon Valley, I see an attention to the concerns we're teaching about and talking about in the book that wasn't there five years ago. There's a changing dynamic among young technologists: no longer rose-colored glasses about the ambitious, world-changing mission statements of big tech companies, but in fact a recognition that technology has benefits and also harms, and a desire, as they think about the use of their own labor, to be associated with companies that are making choices they can live with. That's really powerful to see. And of course, these companies are in a desperate race for talent, so if that's where the 22-year-olds are, that's a really important driver of change. The second important driver of change is inside companies themselves, whether it relates to the debates within Google about the relationship between Google and our national security infrastructure in the United States, or the tremendous pushback
we've seen on how companies have handled sexual harassment, or the movement against the selling of particular technologies, facial recognition technologies, to the police. We see a set of incredibly powerful people who have not seen themselves as having agency coming to realize that, as the people inside these companies who are designing technologies, thinking about the user interface, and working on regulatory issues, they should have a say as well. And that's incredibly powerful. And then of course the third piece is that over the last three or four years we've moved from the caricature of democracy's relationship to these issues, embodied in Senator Orrin Hatch's interaction with Mark Zuckerberg, where Hatch seemed not even to understand that Facebook had a business model built on advertising, to the moment we're in now, coming out of the investigations of the House Judiciary Committee and the appointment of new folks in antitrust at both the FTC and DOJ, where a really nuanced understanding of what's going on with big tech is on display on the part of our policymakers. So polarization and party disagreement are important challenges to tackle, but I don't look at this through an electoral-cycle lens. I think we're actually entering a policy window. And we've seen that policy window open roughly every 30 years when it comes to new technologies: we saw it with the telegraph, we saw it with telecommunications, and we're going to see the momentum of our democratic institutions to address these issues. It may start with lower-hanging fruit like data privacy and data portability, or auditing of algorithms, but it's also going to get to the hard stuff around content moderation and antitrust. So I see it happening before us.

If I could hop in here, just to add something in the spirit of what Jeremy described as the ingredients, the different components, that give us a sense of optimism about democratic institutions rising to the challenge, but to put in a really stark way the stakes that I think are important to communicate. One way to describe the past 30 or 40 years is that we're perhaps exiting a moment that you could say started with the Reagan Revolution, in which the smaller that government was, the better that government happened to be, with a reliance on market solutions for most of the problems that beset society and the globe. When you think about the Silicon Valley orientation to this, I sometimes think it's a fair characterization to reduce it to a simple formula. There is a libertarian streak to the founders of big tech companies that's well documented in the social science literature and in surveys of founders. And when you map the libertarian approach of the founder onto the optimization mindset of their employees, what you get is an optimization of the minimization of government. At the extreme, the optimization mindset is always suspicious of democracy itself, because in our view democracy is not an institutional design for optimizing any particular thing. It's a distinctively fair process for refereeing persistent and ongoing disagreement, so that we can get temporary solutions to problems that can always be updated in the future. So the optimizers in Silicon Valley, the libertarians in Silicon Valley, often have a deep suspicion of democracy itself.
And now that they've amassed so much power, and we're exiting an era in which we've simply relied upon the technologists to exercise their enormous power over us with some beneficence, our democratic institutions can come back to the fore. And, uncharacteristically, I'll be even more optimistic than Jeremy here: it's not just that democratic institutions can rise to the challenge. It's that in tackling this distinctive challenge, we will find a way of rejuvenating the faith that ordinary citizens have in democratic institutions to tackle even broader problems than technology itself. That's perhaps more hope than social science, so to speak, but I do think it's the defining challenge of our age to remind ourselves that when we act collaboratively and collectively, when we as equal citizens co-author our own existence rather than relying on experts to solve problems for us, we steer ourselves into a better future. And I hope we are about to enter an era in which all of us feel that collective power more than we have for the past half century.

So, I love a good optimist, especially on these issues, so thank you for presenting such an optimistic view, and it is unusual for me to push back on somebody who sees the glass as half full. But we also live in a democracy where most of us do not consider ourselves experts in this technology. Even the dinner-table conversations I've had with family members over the last couple of months have included lamenting how much power the tech companies have, from the point of view of: this is the thing we don't understand. We're citizens in this democracy, but we don't know what to do about it. And we are more than a little nervous about the state of our democratic institutions, given the givens. So what's your advice to the regular person who isn't necessarily well versed in how all of this works, but who knows we have a problem and may comprehend that it's kind of up to us, in this democracy, to solve it? Are we left with just assuming that our policymakers are going to figure it out and hoping that they do, or is there something that regular folks can do? Jeremy, maybe this is a question for you.

I'm going to pass this to Mehran to start, because I know he loves to answer this question, but I'll add on top.

So I think education is really the starting point: to understand, first of all, a little bit about the technologies. There's a lot of information out there, and part of our role as educators is to have students, not only from the technical side but from the non-technical side, try to understand it. Along those lines, we've also been teaching an evening class for working professionals out in Silicon Valley, where we're located. The idea is that the more education we can bring to this, the more understanding people can have. They don't need to know the deep issues involved in the technology, but they do need to get clear about what the technology can do and, more importantly, what it can't do, because sometimes we hear these promises from tech companies around self-regulation, that "we'll figure it out ourselves with technology," and sometimes the technology just isn't at the point where it's going to solve these problems, and we need something more.
That's the first step. The second is to get clear on the values that we want to have once we actually understand a little bit about, for example, what machine learning can and can't do, what sorts of biases there are in these systems, and how effective they actually are. Content moderation doesn't require that deep an understanding, but it does require us to think about how we conceive of content moderation and free speech, and where we stand on something like algorithmic decision making that makes decisions about our lives without, say, transparency or due process in how those algorithms work. Once we get that clarity for ourselves about what we want, it becomes much easier to engage in the political process. And I'm also optimistic because I think there's a lot of low-hanging fruit before we even go after big issues like antitrust. There are things that both the left and the right can agree on, like comprehensive policy around data protection and data regulation. As a matter of fact, if you just look at that specific area, what you've seen over the last few years is the state of California, the government of China, and the EU all coming out with very similar privacy regulations, but coming at it from entirely different standpoints and for entirely different reasons. That shows you how important a piece of legislation like that is. So that's the place that gets me optimistic: there are areas in tech regulation right now where we can make those kinds of inroads and lay a foundation for incrementally solving bigger and bigger problems.

The one thing I want to add to Mehran's answer is to say that, in our current conversations about big tech, it's very easy to point fingers at the CEOs of these companies and to say the problem rests there, that the outcomes we see in the country are a function of that person's choices. You can see that with each successive round of stories, like the Wall Street Journal stories last week on the Facebook Files: we have this overriding focus on what's happening in the companies and on the particular personalities of their leaders. I think there are two critical messages of the book. Number one, it's not enough to focus on the personalities. Yes, extraordinary power is in the hands of a small number of individuals, but we have a set of systemic drivers that are creating successive iterations of this kind of concentrated power. But the second is that there's blame to be shared, and the blame is to be shared with our elected politicians, because democracy is the technology, as Rob described, that we use to referee our disagreements and to help us arrive at judgments about how we amplify the benefits or mitigate the harms of particular developments in society. So the critical last step of Mehran's sequence is looking at the people we put in office and holding them accountable for the failure to mitigate the harms of technology. Not just blaming Mark Zuckerberg, but saying to the President of the United States and our elected senators and members of Congress: where are you on these issues? We can do that once we have clarity about the values we want to see reflected in society at large: how we want to balance privacy and personal safety, and how we want to balance the returns we can get from algorithmic decision making against our concerns about justice and fairness.
Everyone can have a view on those issues. The book is trying to empower people to articulate those views and to engage with neighbors and friends to develop an ability to talk about them. But then we need to orient that pressure not just at the companies but also at our political system, to say: we're dissatisfied with the outcomes we're getting, and part of that failure is the failure of our politicians.

As you were talking, Jeremy, I was thinking you could write an equal book called "System Error: Where Our Politics Went Wrong and How to Reboot." Many of us (New America works intensively on major structural political reform: ranked-choice voting, open primaries, multi-member districts) are asking how to be not just nominally represented but represented in a way that allows us to get results. So I was listening to you, completely agreeing. But we have to educate the voters; they have to hold people accountable and believe they're empowered; and they also need to be empowered to change the system. I'm going to pivot a little bit, and Rob, I will start with you on a sort of diversity-in-tech question. As I read this (and part of educating the public is to realize that, as Larry Lessig wrote 20 years ago, these decisions are creating the architecture of our society; it's code that determines how we interact, but it's invisible architecture), it's being designed largely by white men, mostly men, with very few African Americans and very few Hispanic Americans. If you put the population of Silicon Valley up against the census, you would not be happy with what you see. How do we tackle that? And I guess I'll start by asking: are you seeing a change in terms of the people who take your courses, and what should we do about it?

Sure, well, let me start with that question about whether we're seeing a change on campus and in the course. It is true, and this is a positive development, that while computer science has become as large and popular a major as Mehran described at the beginning, it's also the most popular major for women, and there is indeed much greater gender balance in engineering and in the computer science major at Stanford than there historically has been. That bodes well for a more inclusive future within the tech world, and again, in the spirit of public interest technology, not just for people who take their tech talent and flow into Silicon Valley companies, but also for those who go to work for nonprofit organizations or public agencies and deploy their tech talent in new ways. The diversity and inclusion question seems essential for at least two reasons. There's a common understanding, I think, that there's a simple unfairness in the fact that all of this extraordinary generation of wealth has tended to fall into the hands of the founders and the programmers, who tend to be male and are hardly representative, not merely as stacked up against the census but, since their products can affect the entire world, as compared to the whole world. And it's true that if you thought just about wealth generation and an equitable distribution of the financial product of these services (gained, of course, from an active industry of taking data from people across the world and then, for many companies, selling it off to advertisers),
most people are not being fairly treated. But I want to emphasize a second dimension that I think is also behind your question, Anne-Marie, which is that if we have a more diverse class of founders, and of technologists more generally, then people will bring their lived experience, whether because they belong to a racial or ethnic group different from the dominant group in technology today, or because they come from a different country with a different set of lived experiences. We should expect to find a different kind of technology product; the solutions on offer may well be different from the kinds of solutions we see in technology today. And here I want to be, in a certain respect, as maximally charitable as I can to the 19-year-old computer science major at Stanford, because what that person typically thinks is: it's an extraordinary development in the world that a 19-year-old can come to campus, get these technical skills, and contribute via programming to some type of product where, when you change the code base over the weekend, it rolls out on Monday to hundreds of millions of people, in principle. What else could you possibly major in, or do as a profession, in which a 22-year-old could have the power to affect the world in as obvious a way as tech allows? What's missing in that is the very idea that a small number of people have that much power without bringing to the table the voices of those who are affected by the very technologies they're developing. If you can roll your product out to millions of people in the world, you had better understand the perspectives of the people who are going to use those products and be affected by them. Why? Because products encode the values that we talked about at the beginning. And the simple kind of delusion of hearing Steve Jobs tell you, don't you want to put a dent in the universe, and then thinking you can do it by coding overnight in your pajamas and rolling your product out on Monday: you've got to be way more discerning than that, which is again why the ethical frameworks and the social-scientific orientation to thinking about technology are so essential. A more diverse and inclusive tech sector will produce different products that are mindful of the benefits and harms that tech produces.

Now you're singing Tara's song.

And Tara, before we jump back to you, I just want to point out that we are going to be taking questions from everybody who's participating, so look for the Slido box to the right of your video and please send us your questions; we'll be getting to them soon. But Tara, thank you for letting me interrupt, and now back to you.

Okay, maybe just picking up there, and taking this to what it looks like when it's working: are there any promising conversations, or early examples you'd lift up from the book or elsewhere in your research, of bringing broader voices in, and of how that makes a difference?

One of the things we've seen in our introductory class, for example, which is now taken by a pretty large percentage of all undergrads, is that it's reached basically gender parity; on a regular basis it's close to 50-50 between men and women. So at least at the entry point, the place where people can understand the beginnings of this technology and what's possible to do with programming, we're seeing more of a gender balance coming in.
What we're also seeing is that those students are taking those skills to a number of different fields. They're thinking about how they can use programming, and the affordances it gives them, say, for data analysis, in other areas like medicine or politics. So there's a two-way street: we can begin to have some of the ideas from technology infuse into other disciplines as a tool that empowers them, and at the same time bring ideas from those other disciplines into computing, to get some of those problems in front of more technologists who can actually have an impact. What we're seeing as time goes on are the questions Rob alluded to: who's building the technology, and who are they building it for? I was talking to one venture capitalist a couple of years back who said, you know, we have a whole bunch of companies whose business model is basically providing lunch to other startup companies. It just shows you how insular that environment is. But we're beginning to open it up for people to think about broader issues. We had one student in our class a couple of years ago who was looking at the impact of privacy settings in different products on domestic abuse victims. And one of the things she had in mind was: why do I have to go to all these different platforms and set all these different baroque privacy settings? If I want to protect my own privacy from someone who is stalking me or trying to track my information, I'd like to do that in one place, in a clear way, and just have it propagated to every platform I use. The idea is so powerful. She started with an impetus rooted in a particular group of people, based on the experiences she'd had talking to others in that field, and when other people saw it they said: this is such a compelling idea, it's actually important for everyone. And that's where we see the power of these diverse viewpoints: finding solutions not only for small communities, because the interests of those small communities oftentimes generalize to everyone. So there are such examples, and we're beginning to see more of them.
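To make the student's one-stop idea concrete, here is a minimal sketch of a "set once, propagate everywhere" privacy broker. Everything here is hypothetical; today's platforms expose no such common settings API, which is exactly why ideas like this tend to become interoperability and regulatory questions rather than weekend projects.

```python
from typing import Protocol

class PlatformClient(Protocol):
    """Hypothetical adapter each platform would have to provide."""
    name: str
    def apply_privacy(self, settings: dict) -> None: ...

class DemoPlatform:
    """Stand-in platform used only to exercise the sketch."""
    def __init__(self, name: str):
        self.name = name
        self.settings: dict = {}
    def apply_privacy(self, settings: dict) -> None:
        self.settings.update(settings)

def propagate(settings: dict, platforms: list[PlatformClient]) -> None:
    """Push one user-chosen privacy profile to every registered platform."""
    for platform in platforms:
        platform.apply_privacy(settings)
        print(f"applied to {platform.name}")

# One decision, applied everywhere: e.g., someone being stalked locks
# down location sharing and discoverability in a single step.
lockdown = {"location_sharing": False, "discoverable_by_phone": False}
propagate(lockdown, [DemoPlatform("photo_app"), DemoPlatform("chat_app")])
```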
I just have to intervene to say I thought of that this morning, when the news was that the EU is thinking about legislation to demand one universal charger cable for all your devices. Somebody is identifying with the frustration of forgetting your charger and having to replace it. Not as profound an example, but one that would affect all of our lives.

And that connects to one thing about Mehran's comment, which maybe goes without saying: the pipeline is only one piece of the problem. There's a lot to feel good about with respect to how the pipeline into tech is increasingly attractive to a diverse community, male, female, people of color, and the like, but the institutions that are the receptacles of new technologists have got to be transformed as well. I think about that both with respect to the private sector, which is at the center of our conversation, and also the public sector, so I want to say something about each of them. With the private sector, we are still in the early stages of the momentum needed to really overturn the dominant norms and practices in large companies that have made these incredibly inhospitable environments for women and for people of color. There are strong and powerful voices in many of these companies, but lots of those voices are leaving, and on the way out they're raising concerns about cultures that are perceived as toxic and unwelcoming. Raising voices has been a really important driver of change: the transformation that happened at Uber as a result of the concerns raised by Susan Fowler was transformative in terms of its internal culture. We're going to need to see more of that, because ultimately, if you want this new, diverse community of technologists who are being trained to find a place for themselves in the private sector, the private sector is going to have to welcome them as their whole selves, with the concerns they bring to the table and the issues they're raising, and we shouldn't pretend that's naturally going to happen. We're in a contested space, and we're still seeing the beginnings of that. The same goes for the public sector, and we're talking here with folks who've been really interested in how we position our public institutions to be more welcoming of technical perspectives and technical talent. Public institutions have a lot to offer our young technologists, because these are some of the arenas where the kinds of concerns that motivate people can actually be addressed: how do we make sure that well-being is broadly achievable for people regardless of their background or identity? How do we make sure that broadband is accessible in the last mile? How do we ensure that the achievement gaps, the outcome gaps, that we see in health and education are actually reduced? These are places where public institutions have a critical role to play. And part of what we're trying to do, in our class, with professional technologists, and through the book, is to reject the idea that the only way to engage on those issues is through the private sector, this notion that "I can get fabulously wealthy and make social change in the world" is the path of the 21st century. I think we're seeing the limits of that model. But for the public interest to become a meaningful direction that people pursue, they need to see pathways into the public sector where they can bring not only their interest to the table but also their capabilities, and see those capabilities put to use. That's why this movement around public interest technology is so important.

So I can't resist diving into that a little bit more. We're going to go into the questions from all of you, and I'll just remind you again to use the Slido box to the right of your video to submit questions; as soon as I ask this one, Anne-Marie, I'll ask you to pose the first of the questions that are coming in. But I can't resist, since I've worked on public interest technology at New America, digging into this a little bit more. It seems to me that in this conversation, in general, all roads still lead through the tech companies, so to the extent that we're trying to diversify the skill sets that go into government or into NGOs to help us solve our public problems, we're still really in a conversation about how we bring talent that right now is going to the tech companies into other sectors.
We're also in a deep conversation about diversifying the kind of talent that goes into tech companies, which I also think is a necessity, and I'll just put in a plug for my colleagues' work at New America with the Public Interest Technology University Network; Stanford is a part of it, but so are some historically black colleges and universities and other minority-serving institutions. Part of the goal is to diversify who goes into the tech sector and what it is that they know: to get a few more people who think about philosophy and ethics, for example, or about civil rights, into the tech companies, and also to create an environment in which, the way I like to put it, somebody who's in middle school right now decides they want to solve homelessness, and so they decide that computer science or engineering or design is their route to social change. We haven't reached that transformative moment yet. You're all in a university setting; you built a course to try to diversify what the many, many computer scientists at Stanford are getting by way of training. But how do we change the equation so that the people who are deciding to become computer scientists are people who want to change the world somewhere other than at a company?

Maybe I'll take a crack at starting on that fantastic question, because it is indeed something we've been thinking about ever since we first came together to conceptualize the course. Sometimes the way I think about it, on a broader time horizon, is that, in a really reductive way, in the 20th century the most important discipline to study was economics, and the most consequential profession, especially in the late 20th century, was that of the financier, transforming and globalizing the world in all kinds of powerful ways. In the 21st century that's changed: the most important discipline is computer science, and the most important profession is programmer. We see the transformation on our campuses, with people majoring in record numbers in computer science. And if they only flow into one destination (given the usual line about universities training the next generation of leaders), putting most of your tech talent into Silicon Valley will never be enough. Just as you said, Cecilia, I think it's essential, because of the great power that tech now holds, that it's not only a technical skill with that optimization mindset at work; it has to be balanced by people who also have a sense of the ethical frameworks, the value trade-offs that technologists inevitably have to confront, and the social-scientific and policy dimensions of all of these great questions. So the tech sector needs to be diversified in that way, and, I'll just add, not by appointing a chief ethics officer to whom all ethical questions can be outsourced. Ethics has to come to be seen as the responsibility of everyone: pushing ethics all the way down the stack, as it were, into the very earliest moments of product development, not at the tail end when you decide whether or not to release the product. And then, in the spirit of public interest technology, we should champion these extraordinary technical skills just as was true in the 20th century with economists.
These technical skills can flow into the full array of professional destinations, where the programming talent and the data science talent can be put to extraordinarily good use. And when we create professional pathways for social scientists and humanists into tech companies, and for technical people into civil society and public agencies, I think we'll have a much better world.

I'll just add briefly to that. As an educator I have a little bit of a bias, but I think education is really the key. What we've actually seen in about the last five or six years is many states taking a really comprehensive look at computer science education in the K-12 system, as a way to get everyone more informed about what computers can do technically, but also about the broad array of social issues involved in building computing technology, and the ways computing technology can in turn be used to address different kinds of social issues. And what we're seeing there is real uptake of the notion that computing is something for everyone, in the same way that, when we teach science, we don't have everyone learn physics because we need more physicists in the world; people study physics because they need to understand what's going on in their world and how that analytical thinking applies in different areas. That's the same sort of thinking we should take to K-12 computer science education. It's not that we want to churn out an army of programmers. What we want to do is raise the level of digital literacy for everyone, so that there's a better understanding of how technology is affecting their world and how they can use that technology to make the changes they want to see.

So before I ask the first question from the audience: I would agree with that, and I would add two points. One is that everyone who wanted to do anything in the social sciences when I was an undergrad had to take Econ 101 and 102. You had to know macro and micro; without that you weren't really going to be able to operate in the world of social science. And I hope, particularly as coding becomes easier, as it will, that there's at least enough basic familiarity to then do one of the other things that will help, which you all are modeling: working in teams. I'm never going to be able to code, but if I'm literate enough, I'm going to be able to work with people and bring my expertise to bear. So, our first question is from Roger, and it's a question I hear all the time; it's a great question. Would regulation of, or more societal input into, big tech put the big tech of democracies at a competitive advantage or disadvantage to the big tech of autocracies? Jeremy, I'm going to direct this to you. You've heard this argument so many times: when the antitrust argument comes up, the first response you get is that all we're doing is tying our hands vis-à-vis China. So this is a general version of that question.

I think we're at a moment where this sort of viewpoint is very much in the public conversation, and something that we need to engage critically, because, as we think about the societal harms, whether on the social media platforms or in the issues related to the concentration of power in a small number of companies,
a lot of what you hear is: well, what about China? If we constrain our companies in various ways, if we address these societal harms, we will lose the race with China. You see this in commission reports and in efforts to drive greater funding to AI and the like. What I want to say is: this is a false choice. I totally reject it, and we reject it in the book. If you think about what's unfolding with the impact of tech on society, that has huge consequences for our ability to compete with China over the long run: whether our democracy survives; whether we can maintain a shared consensus in the United States about what we care about, how we think about one another, and how we relate to one another; whether we have a healthy information ecosystem in which we can elect politicians and hold them accountable; whether we have an economic system that benefits not only a very small number of individuals who sit in Silicon Valley but a larger whole, especially in parts of the country that have been left behind. These are absolutely existential issues. And if you think we can effectively compete with China without addressing them, then I think you're missing the point. So our view is: let's not paint this as a false choice. Let's make smart decisions about the appropriate rules of the road for technology, but let's not pretend that we can pursue the kind of international competition we need, on the economic front or on the political front, without addressing the core issues that ail us at home. At the center of that is regulation, and ultimately the establishment of a framework for governing technology's societal effects. We're dealing with the near-term ones today: privacy, algorithmic auditing, et cetera. Of course, the big one is what automation is going to mean for the future of the workforce. When you look at the data and think about whom automation is really going to affect, it is at first going to affect a population of folks at the lower end of the income distribution; you're going to see their jobs changed and affected. But one of the real changes is that our advances in AI are so powerful that computer programmers are going to be affected by AI, doctors are going to be affected by AI, in ways that we need our politics to get its head around. And the idea that we have to throw up our hands and ignore these societal harms in order to pursue competition with China is just the latest tool of a set of tech elites and leaders behind big tech who'd like to say: just leave it to us, we're the beneficent folks in charge of this direction, and we've got it all under control. In the moment we're in, people no longer feel comfortable leaving it in the hands of the tech elite to make those decisions for us. So let's not accept this false trade-off. Our politics often presents us with false trade-offs; we saw this in the aftermath of 9/11. Do you want your civil liberties, or do you want to be safe from terrorism? And we've learned some important lessons from how boldly that trade-off was presented to Americans over time. Let's not fall into that trap again.

Hear, hear. So, a shift in direction: this is a question from the audience. We've talked in this conversation about the role of the academy and the role of the individual.
But the questioner asks: what role can civil society organizations play in working for change? Religious, consumer, other types of institutions?

Let me begin with that. It's a fantastic question, because civil society organizations are so often the ones left behind, or left out of the considerations of how democratic institutions and the marketplace operate. I think civil society institutions are in fact one of the most promising places where citizens can exercise their voices, because, as Jeremy explained earlier, we shouldn't expect our elected officials, in one fell swoop and one regulatory moment, to solve a whole set of problems. Plus, we have all the familiar forms of dysfunction: polarization, and perhaps a shared interest in tech policy or regulation but for very, very different reasons between the two parties. One of the most obvious and historically relevant ways that citizens can exercise a voice is by assembling together in groups, with perhaps different goals, that communicate their shared interests to a wider public, and by working out within civil society the kind of contested feelings and ideas and preferences that people have about a whole range of policy areas. That has an educational effect for other ordinary citizens, where of course journalism plays a hugely important role, and it has the important effect of traveling upward into the formal institutions of government, communicating the shared and the disparate interests of different groups in society, so that these bubble up from otherwise small, individual concerns. The last pitch on this, just to say why I think it's so important, is that the tech companies would far prefer a kind of discourse in which you think of yourself as a consumer or a user of the technology: if you don't like it, then just don't use it; don't like Facebook, then delete the app; don't like your search engine, then choose another one. That kind of reductive thinking, in which it's always a small note card of options presented to someone in the role of a user, distracts us from the collective power we can exercise, not just by voting, which is one important tool of course, but by participating in shared civil society groups, the kinds of things that are hyper-local, in your own community. It doesn't have to be writing a check to the ACLU because you like what they do on privacy. It can be a small community group in your own neighborhood that's concerned about Nextdoor's privacy policies and the stuff you see on its newsfeed about your own neighborhood. So I think there's extraordinary hope to be found in civil society groups, and that's one of the most important pathways for an ordinary person who thinks technology just acts upon them to exercise their own agency.

Keep the questions coming; there are some really excellent ones, and it's getting hard to choose. I'm going to choose one from Lily, who asks: how do we reverse the indoctrination that students receive while in STEM majors, which contributes to their lack of agency when they enter the workplace?

So, one thing I'd say is that it's not just STEM majors. I think students in general sometimes feel a lack of agency about what they can do; they feel that certain choices are provided to them and those are the only choices they can choose from.
And part of it is understanding that there is a broader array of things students can choose to go form or get involved with. I'll give you one example: the US Digital Service. That's a place where technologists have found there's a real need in government, and where they can bring tools from what they've learned in their classes, alongside people who've been working in Silicon Valley for years, to go have a real impact and bring those digital tools to government. On the indoctrination point, I would reframe it a little, to think about the agency students feel they have in terms of the choices they can make. One example of what we're seeing on campus is students choosing which companies they want to go to, or, even after they've chosen to go to a particular place, engaging in protest while they're there if they believe those companies are engaged in policies they don't like. So I think that's the awakening we've talked about a little before: it's no longer seen as an industry someone plugs into for a job, where they have to do whatever the rules of that industry are. Individuals have two courses. One is that they can go into that industry and exercise much larger agency there, as technologists or in whatever other role they play. But secondly, they can choose a different path, which is forming a nonprofit or an NGO to actually address some of these issues, and we're seeing more of that. For example, Timnit Gebru, who was just at Google and was controversially fired, is actually starting her own organization to do research around the harms of technology, and that's a whole different way she can bring her expertise to bear on the sector.

Let me add something to that, because obviously, as a social scientist, I'm new to the engineering space, and part of teaching with Rob and Mehran and engaging students in STEM fields has been an introduction for me to different mindsets and different approaches. One of the things that strikes me in that environment is how much our students struggle with questions that don't have a right or wrong answer; there's something in the DNA of computer science and engineering, and of the math and stats that underlie them, where students are acculturated to the notion of questions having a right answer, of the path to that answer being more or less efficient, and of people getting credit for that. So part of what we do in teaching students is give them all sorts of questions that don't have a right or wrong answer. And you can imagine how frustrating that is from a grading perspective, because people ask, well, did I get 100% or not, and if I didn't get 100%, what was the right answer? I just want to give you one example of this. When we're working with our undergraduates, we often frame dilemmas for them and have them exercise their collaborative and democratic muscles to think about these technological dilemmas.
What we gave our students last year was to think about the advent of new technologies that would enable universities to optimize advising and mental-health service provision for students on the basis of data gathered about them on a regular basis: what time did you re-enter your dorm every night this week (which is embedded in the key-card system), how often have you been to the library this quarter, what happens when you go to the dining hall, how long do you stay there, how much are you checking out, you know, the volume that you're eating, and the like. These are things that are all available to the university, and you have all sorts of technology companies saying: look at this extraordinary data, we can optimize the provision of advising services to individuals, and we can catch people right before there's a mental-health crisis. Obviously this raises all sorts of concerns for students about the extent to which they exist in a surveillance society on campus without even being aware of it. And so we put to students not only the question of what's technically possible with this data, that is, could you develop a predictive algorithm that did a reasonably good job of helping advisors engage with students before a crisis reaches a boiling point (a toy sketch of what such a model might look like appears below), but also all of the uncomfortable questions that don't have a right answer: should we do this? What are we trading off in doing it? Who benefits and who is harmed? And I think developing that muscle memory in STEM students, breaking them away from the notion that there are right answers and wrong answers, black and white, toward the recognition that there are just better and worse answers to hard questions, is essential. The way we determine whether those answers are better or worse is by exercising the act of explaining and defending these different viewpoints, to one another and to people with different views. That's what hasn't been happening enough, and we need to create the dynamic for that to unfold on campuses and then inside companies.

I love that. And I will say, as someone who also teaches, that resistance to "is it right or is it wrong" is familiar; this is our normative training. I'm going to offer up a question for Rob from an undergraduate, who says: as an undergraduate who studies tech from a philosophical perspective, I often get asked why a normative lens matters. What is your go-to argument in favor of philosophy?

Well, that's a question that goes right to the heart of how I think about showing up in a classroom at all, or even why I chose philosophy as something to do with my life. And I'm going to be totally sincere, but maybe a little predictable, which is that the kinds of moral choices that confront us as human beings are inevitable. We have to make conflict-ridden choices among a welter of extraordinarily important values in the world. Tragedy, in the philosopher's view, my view, is not that there's a really good thing that can happen to you and a really bad thing, and it's tragic if the bad thing happens. Tragedy is that there are multiple good things that we all want in a life, and we can't get all of them at once, and so we're doomed to make choices among things that are actually valuable.
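To make the advising dilemma above concrete, here is a minimal, purely illustrative sketch of the technical half of that exercise: a toy risk model fit to synthetic campus-activity data. Every feature name, number, and label here is invented for illustration; this is not the authors' actual assignment, nor any real university system.

```python
# A purely illustrative sketch of the classroom dilemma: a toy "early
# warning" model trained on hypothetical campus-activity features.
# All features and labels are synthetic and invented for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n_students = 500

# Hypothetical weekly features drawn from key-card and dining-hall records:
# late dorm re-entries, library visits, average minutes per dining-hall stay.
X = np.column_stack([
    rng.poisson(lam=3.0, size=n_students),              # late_dorm_reentries
    rng.poisson(lam=4.0, size=n_students),              # library_visits
    rng.normal(loc=35.0, scale=10.0, size=n_students),  # avg_dining_minutes
])

# Synthetic label standing in for "advisor later flagged a concern".
y = rng.integers(low=0, high=2, size=n_students)

# Fit a simple classifier and rank students by predicted risk. This is the
# "optimization" step that raises the surveillance and consent questions
# the students are asked to debate.
model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]
top_flagged = np.argsort(risk)[::-1][:10]  # the ten highest-risk students
print("Students flagged for outreach:", top_flagged)
```

The sketch shows how little code the technical step takes: ranking students by predicted risk is trivially easy. Whether to collect this data at all, whether to act on the ranking, and who bears the cost of the model's errors are exactly the questions with no single right answer, which is where Rob picks up.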
And so, what makes philosophy important to a technologist is not that there's some calling for technologists alone to think about the choices they confront, as technologists or as humans. It's a common human predicament to confront these choices. So what I would say to anyone is: do you prefer to be morally sleepwalking through life? Do you prefer thinking that you had a moral compass all set and you no longer need to reflect? Of course not. The Socratic motto here is that the unexamined life is not worth living. Moral sleepwalking is a disastrous approach to living. So wake up, confront the moral choices that beset all of us, and get in the conversation.

I'm going to kick it over to Ann Marie, but I do think at some point we could have a whole conversation about declining majors and the importance of perhaps reversing that decline. But Ann Marie, over to you.

I'm just smiling at the idea of a class called Moral Sleepwalking. That may be a seminar, but it's a great title. So I have a question that a couple of you may want to answer, but I'll start with Jeremy. It says: I'm wondering if there's a way to collaborate with other local universities that have major CS programs, like SJSU (I'm going to assume that's San Jose State University), because, the questioner says, that arguably puts more technologists out into Silicon Valley. And Jeremy, I know you run the impact labs, so I thought you might want to talk about how a version of the model the three of you are putting together at the curricular level might take place at the collaborative level.

So whoever sent in that question, and whoever's interested in collaboration: look up our email addresses and send us a note. When people have asked us what comes after the book, and it's something we've been thinking about a lot, I think our answers, by and large, focus on changing the culture of tech, on cultivating an ethic of responsibility in tech. Not because we don't think the regulatory questions are important (they're absolutely important), but there are a lot of actors focused on shaping that regulatory space. And we think that, sitting at Stanford and at a set of institutions that train future technologists, there's incredible work to be done in creating space for the kinds of conversations we're talking about today to happen on campus. We know that students want to have those conversations, but sometimes students need a license to have them. And we also know from the work we've done together how challenging those conversations are. The old model of the ethics class in engineering or computer science, where you primarily focus on the duty of an individual engineer not to do harm, and you think about the construction of nuclear weapons or the failure to get the plans right when a bridge was built, just isn't up to the current moment, in which technologies are generating harmful effects that are partly about the product but partly go well beyond the product. And resolving those issues isn't just a question of getting the design right; it's getting a design that makes sense for a democratic society that disagrees about what the right path forward might be.
And so, we see ourselves as part of an emerging conversation: some of it in the Public Interest Technology University Network, Ann Marie, that you described and that New America has been involved in; Mozilla, Omidyar, and others have invested in notions of responsible CS, bringing together faculty across institutions. I think there's an extraordinary opportunity for those of us who have the privilege of teaching future technologists to generate and energize a set of conversations that could have sweeping implications for the future of technology. Maybe not for the power of the companies over the next five years; let's let that be the regulatory enterprise that's already unfolding. But I think we have to break out of university models that treat people in silos built around our construction of disciplines. You know, part of what's unusual here is that Rob and I, a political scientist and a political philosopher, are teaching in the computer science department. How does the political science department feel about that? Well, they wonder why I'm not teaching more political scientists, and I say: all those political scientists have become computer scientists, so I'm going to teach them in the computer science department, to think about politics, to think about institutions, and to think about democracy. So for those who want to think together about this, get in touch. We're thinking about standing up a set of cross-institutional initiatives focused on cultivating this new ethic of responsibility, and we're looking for partners everywhere.

Great. So the next question I'm going to direct to you, Mehran. It comes from Myrta, who is a fan of Code in Place, asking for suggestions on resources for non-tech people to better educate and inform ourselves about tech companies and their practices.

Thank you for the reference to Code in Place, and if you took part, either as a student or a section leader, we really appreciate it. Code in Place was a program that brought volunteers from around the world together to teach people around the world about computing, and it just warmed the cockles of my heart how many people came together to do that purely as a volunteer effort. So thank you. In terms of resources, there are a number of resources from our class that are now publicly available. If you go to the website at cs182.stanford.edu, all the materials are there. There are some materials we had custom-written, case studies that we got journalists in the field to help frame and write, that we've made available under a Creative Commons license, and we've also developed a set of study questions around them. The case studies are available on the website, and on the website for the book we have some additional links, including links to the case-study questions. The space is evolving pretty quickly, so two things to keep in mind: there are some readings in there that we think of as more traditional, or that help people from different disciplines understand each other's language and ways of thinking; and we actually update that set of readings every year, because the arena of these technological issues is changing so quickly. So we will keep that information up to date.
The ACM, the Association for Computing Machinery, is also putting together a repository of materials around ethics for computing, and that would be another place to look.

I'm happy to open it up with another question from the audience: is there a definition of public interest technology that clearly differentiates it from commercial-interest technology that sometimes serves the public? I'll open that up to any of the panelists. Maybe Ann Marie or Cecilia or Tara, you should feel free to take a crack at an answer too; you've been championing this work for a long time.

I'm happy to chime in, but it would be great to hear from you as well on this.

So let me give it a start, having worked on a definition with a number of professors, including Jeremy. Hana Schank and I, in our book, take a very clear position on the definition of public interest technology. I will say, in doing our research, we had really provocative questions put to us, like: didn't the folks inventing the atom bomb think it was in the public interest? The phrase "public interest" hands a lot of editorial control to whoever invokes it. We lay out that public interest technology isn't in fact a technology; it is a practice, because, as anyone publishing a book knows, by the time you pick a technology it is truly obsolete. The point is that the practice of technology needs to serve the public interest, and we anchor that in three aspects. But I do think this is a complex question, so I'm happy to hear from Rob, Jeremy, Ann Marie, and others on it.

I love this notion of a practice, and Tara, we spent many hours deliberating over these definitions as the Public Interest Technology University Network was being built. I have some very clear senses of what it's not. There's a way in which public interest technology has sometimes been interpreted as: let's just bring the technical folks in to build great websites for government, or great mechanisms for learning from government data. And some of the early enthusiasm and rose-colored glasses about technology and what it brings to the table were really in that spirit. But the reason for focusing on it as a practice, an exercise in thinking about not only what new technologies bring to the table but also their consequences for society, not just the private interests of the companies that build them or the interests of the agencies that deploy them if they're in the public sector, is to recognize that we have to grapple with the hard question of what is in the public interest. One anecdote in this respect: it's one of the first questions I ask students in the class we teach together. I ask them, what is the public interest? And I remember, the very first year we taught this course, a student raised their hand and said: well, I'm a member of the public, so what's in my interest is the public interest. I think that just underscores the thinness of our ability, in this day and age, to grapple with notions of the public, and to separate what is the collective result of deliberation and debate in society about what we want from what is really the interest of a private company.
Right. You think about the mission statements of Uber or Facebook or Google, which aspire to what are really public ends, but part of what we've discovered over the last five to seven years is that they are not primarily focused on public ends; they are driven by private ends. Sometimes those have extraordinary positive externalities for societies. Sometimes they have really negative social consequences. And so it is the responsibility of our democracy to grapple with what the public interest is. Part of the reason I push back on the deployment of technology for public-spirited goals as the whole of public interest technology is that I think there are responsibilities for private companies as well with respect to these social effects. I never want our focus on public interest technology to be reducible to that subset of students who raise their hands and say, I want to work in the federal government, or I want to work in a nonprofit, or I want to work on criminal justice reform, because I want all the students who aren't raising their hands for that to also feel that there's something called the public that they have to care about, be responsible for, and grapple with in the context of the work they do in the private sector.

Can I add on here? I'll try out something I've only offered up on a few occasions, and I'm still working through it, so I'm tentatively offering it. I'd elevate this difference between a private interest technologist and a public interest technologist to a slightly different dimension. I'll go out on a limb and say, as I've come to understand the orientation and practice of computer scientists somewhat over the past five years, I often feel the question that AI scientists or technologists have asked themselves is: how can we get machines to do things as well as, or perhaps better than, humans? Can we get a machine to beat the checkers champion? Can we get it to beat the chess grandmaster? Can we get it to do things better than humans can? I feel that's a profoundly mistaken framework to impose upon technical talent in the first place, or to ask of technical talent in the first place. Why do we play chess with each other? Well, yes, we have a set of rules which identify, Ann Marie, whether you're the winner or I'm the winner or there's a draw, and there's a competition at one simple level of chess. But we play chess because it coordinates human activity in a way that develops our capacities as humans. When you develop a machine that can defeat Garry Kasparov and say, well done, technologists, way to go, Deep Blue, you've only done the first-level thing of defeating someone. You have completely ignored, and even undermined, the second and more important level: coordinating human activity and allowing for the development of human capabilities. So for me, a public interest technologist is someone who stops asking, can we get a machine to do something better than a human, and starts asking, can we get machines or technology that coordinate human behavior and help to amplify or augment human capabilities?
I love that, and there's probably no better argument for having philosophers engaged with technologists than that kind of different thinking. I've never actually thought about it that way, and it's a very rich vein. So, we are ten minutes from time, and we're going to close out by asking a question that you should ask of all authors when they have a new book. Again, the book is a terrific read, a teaching tool, and a provocative general-audience book. I can assure you, having read it, that this is a book you can give to anyone, and anyone can read it; that's the point of it. But I want to ask each of you in turn, and Mehran, I'll start with you: what is the one thing you hope your readers will take away from this book? And obviously each of you can have a different answer. So Mehran, let's start with you.

I think the thing I'd like people to take away is a sense of their role in the bigger picture, a sense that everyone plays a role. Part of the book was written for technologists, to say these are issues you need to think about when you're in your company or creating some new venture. But it's much, much broader than that. It's also a set of issues for people in different sectors to think about, in terms of how technology impacts them. It's for policymakers to understand that if we have things like AI coming down the road, we're going to need to reskill people; we're going to need policies in areas that are not about technology, things like education, to mitigate some of the impacts that are going to result from technology. And ultimately it's for people to think about their roles as citizens: to understand that they play a role in how these issues are going to play out, that they're not powerless, and that by understanding what's actually going on, they have a lot more power than they think.

That's terrific. Jeremy, you're next.

So I think there's a myth of Silicon Valley that we need to bust. The myth is this embrace of disruption without regard to the consequences, the idea that move fast and break things is something to celebrate, and that there's no cost to it if we want the benefits of technology. I just think that's dead wrong. And it's generated a passivity in our democracy, and a paralysis, because people act as if the effects of technology on all of us are somehow fixed or preordained. I don't think that's right at all. The effects of technology on society are a function of choices that we make. There are choices about the technologies we design, who they're for and who they're not for. There are choices about how we see technology interacting with human capability and human judgment. And there are choices about the steps we take in our politics to mitigate harms or not: to pave the path to the information superhighway through a regulatory oasis, or to anticipate potential harms, look for them, and adjust to them, either through direct engagement with the tech sector or through the kinds of complementary policies that Mehran described. So there's nothing preordained about the path we're on. But I think we've been told that there is, and that we face this false choice between benefiting from technology and all that it brings to our lives, or bringing innovation in the United States to a stop. We reject that. Part of this is about reasserting agency, but we have to come to believe that that storyline is just not true.

No false binaries.
So, Rob, bring it home.

Right. Well, I'm going to go back to the philosophical impulse in me. For me, the main message of the book is that everyone who reads it needs to wake up, and I mean that in two respects. First, we need to wake up to the big power grab that big tech has carried out over the last 10 or 15 years, in which a small number of people and a small number of companies are exercising an enormous amount of power over us, leading, as Jeremy just described, to a certain sense of passivity in the rest of us. And secondly, we need to wake up to the dangers of moral sleepwalking. Technology encodes within it a set of values; those values are not neutral, and they're not scientifically objective in the sense of escaping all human bias. They are themselves a set of values and choices made by human beings that are now dressed up as machine decision making. We all need to wake up to the idea that we have to confront trade-offs in life, that we have to identify the values we care about: privacy versus personal safety and national security; the use of automation versus the value of human agency and of actually doing things ourselves, and its effects on human welfare and material well-being; and on and on. The book tries to enumerate and clarify what some of those value trade-offs are, and I want everyone to wake up to them and to weigh in now with their own views.

I love that. And there really is a common thread in all your answers, which the book directly addresses: these questions are for everyone. We can all participate, and we must all participate. I want to end where we started, by pointing out that because the three of you collaborated, because you bring together technical knowledge and philosophical knowledge and political knowledge, you're able to take these complicated questions that have sat behind a veil of mystery (you know, computer science: if you don't know how to code, if you don't know how to do data analysis, you can't really participate) and make it clear that, no, this is like anything else in a democracy. We regulate all sorts of complicated things, and yet we're not all financiers and we're not all doctors; tech is no different. So, again, it's a wonderful book. It's a book for anyone, and it's a great read, written in a very accessible way with lots of great stories. With that, I want to thank our authors, Mehran Sahami, Jeremy Weinstein, and Rob Reich, and also my fellow moderators, Cecilia Munoz and Tara McGinnis. And as always, thanks to Vantisha Flood, who helped us put this on, and to the New America events staff; without you we simply couldn't do this. And thanks to you, the audience.