I'm Shobita Parthasarathy, and I'm Professor of Public Policy and Director of the Science, Technology, and Public Policy Program, or STPP, as we call it here at the Gerald R. Ford School of Public Policy. STPP, which is a co-sponsor of this event, is an interdisciplinary, university-wide program dedicated to training students, conducting cutting-edge research, supporting communities, and informing policymakers on issues at the intersection of technology, science, equity, society, and public policy. If you'd like to learn more about it, please visit our website, stpp.fordschool.umich.edu. Today's event is part of the Dean's Symposium over the next two days, focused on policy innovation for our times. I hope that you'll be able to attend some of the other wonderful programming that we have planned; you can find it at fordschool.umich.edu. Before we get to the main event, a few thank-yous. I'd like to acknowledge our media partner, Detroit Public Television, and our promotional co-sponsors: the University of Michigan Science, Technology, and Society Program; the Michigan Institute for Data Science; the Department of Computer Science and Engineering; the Office of Graduate and Postdoctoral Studies; and the School of Information. Finally, I want to thank the Ford School staff team, Daniel Rivkin, Julie Burson-Grand, and Cindy Bank, who have worked tirelessly to make this event happen. There is perhaps no area of policy that needs innovation in our times more than tech policy, and nobody better to speak with about tech policy than our guest, Alondra Nelson. Professor Nelson is the Harold F. Linder Professor at the Institute for Advanced Study in Princeton. She served as Deputy Assistant to President Joe Biden and Deputy Director for Science and Society at the White House Office of Science and Technology Policy, where she also served as its acting director.
She's a distinguished senior fellow at the Center for American Progress, and in 2023 she was included in the inaugural TIME 100 list of the most influential people in artificial intelligence. In addition, she's an elected member of the American Academy of Arts and Sciences, the American Association for the Advancement of Science, the American Philosophical Society, the National Academy of Medicine, and the Council on Foreign Relations. Hashtag goals. She is the author of several books, most recently The Social Life of DNA, an award-winning exploration of the social implications of direct-to-consumer genetic technologies. Professor Nelson and I will chat for a while, and we'll then open it up to audience questions. If you're watching online, please click the link on the webpage. If you're here at Weill Hall, please use the QR codes on the yellow cards that were distributed in advance; you should see some of them near your chairs. If you're posting to social media, please use @FordSchool and #DeansSymposium, that's with two S's. Your questions will be collected, and our two intrepid Ford School students, Farah Pitcher, an MPP and STPP student, and Yael Atsman, a BA student, will ask them. So now let's get to chatting, Alondra. Maybe we'll start big. We're at this very strange time when it comes to tech policy. I think both of us have labored in obscurity and wondered, as social scientists, why anybody would study science and technology, but now it's there multiple times a day on the front pages of choose-your-newspaper. Everybody is talking about AI. I often try to tell people that it's neither magic nor an asteroid. What is one thing that you wish people knew about AI?

I love this question to start us. You stole my, "it's not magic" is usually my go-to, but I'll take a beat to say my own thank-yous and I'll come back to something else.
I'm so glad to be here, and I'm so inspired by Dean Watkins-Hayes's leadership. I'm so inspired by a conference on policy innovation that puts family policy and tech policy and climate policy and racial justice policy and public policy together, because they should have always been together in our policy conversations; they're so imbricated in each other. So I'm so grateful for her leadership and for her vision that is bringing all of these important, vital policy spaces together. So, having stalled, I will say, I think the one thing that people need to know is that AI is a tool. It's math, it's statistics, it's hardware, it's software, and it's a very powerful tool. We mystify around it, there's a kind of enchantment around it, but fundamentally it's hardware and software. For those of us who work in the space of science and technology policy or science and technology studies, Farah and Yael, who are sitting here, know this well: we make tools, and we imbue them with meaning. That means that we can govern them, that means that we can set the course of how they'll be used, that means we can regulate them if they need to be regulated, and that means that we have the ability to create the good things that we want, and that they're not just going to happen. So it's likely that some of the bad things that we anticipate might happen if we do nothing, but it's also likely that some of the good things that big universities like Michigan are hoping might come in the research space or in the policy space through the use of AI are certainly not going to happen if we don't create the conditions of possibility for them to happen. All of this to say that the technology is just not going to do it.
A chatbot alone is not going to do AI for good, a chatbot alone is not going to cure cancer, an algorithmic system alone is not going to destroy the world. All of these are about human choices and human design. So the one thing I want people to know is that we make AI; we can govern AI; we can direct it to the ends that we seek for it, or not.

And in that spirit, you are the chief architect behind the AI Bill of Rights, and we were talking earlier and you said that one of the things you thought was incredibly important about the AI Bill of Rights is making it accessible to people, so that they can actually engage in the way that you're suggesting. For those of you who don't know about the AI Bill of Rights, it establishes certain principles for the development of AI that include safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. And so I'm just wondering, as an academic in the policy process, again, somebody who has thought in an abstract way, or maybe a scholarly way, let's say, about tech, and then being in a position of putting it together, what are you the most proud of in putting that together?

I'm proud that we created a document that people could read, that the general public could read, and that we did it in a process that included the public. So it was released, the Blueprint for an AI Bill of Rights, in October of 2022, and in October of 2021 we published an op-ed in Wired that said to the American public: we're doing this policy process at OSTP, the Office of Science and Technology Policy at the White House, and what do you think about AI? What are your concerns? What are your aspirations? What should we be doing? What should government be thinking about? And we put the email address for the office, for this project, in this Wired op-ed.
So it was really like anyone could write to the White House about AI using this email address, which we regularly checked and responded to. The process also included lots of town halls. Talking about policy innovation, we borrowed from the FDA model: the policy nerds here will know that if you ever watch an FDA hearing, they will have a two-minute clock, so you can say your piece for two minutes. So we did that, and people could just talk about what they would recommend that we do. We did it at different times of day to capture people in different time zones, and we also had panels on different subjects: work and economic opportunity, algorithmic discrimination. So we began it and worked for a whole year with it being a very public process. It was a big team of people, both in the White House and across the federal government, working on this project, and we were all really committed to writing with clarity, not using jargon, writing the document in a way that was accessible. Distinct from a lot of policy documents, I think, it has its own website. You can print it out as a PDF, which you're more likely to do, but it was important for us to create a website that was more readable. It has icons; it's a bit more, I mean, as far as White House websites go, it's a bit more engaging. It's a low bar, admittedly a low bar, but all of that was deliberate. Because we were serious, and we knew, you know, I haven't worked in government in over a year, but President Biden is very clear about kitchen-table conversation: you should be able to talk to regular people about things. If you're a policymaker, no matter how complex it is, you should be able to tell people the brass tacks for their lives. So that's what we set out to do. It was really hard; it took hundreds of hours of editing to get out of the space, for those of us
who were engineers or academics, of that more abstract language, and make it plain spoken. So I'm actually quite proud of that, and it has meant that it has been useful in a lot of different places. That's the second, ancillary source of pride for this project. It's been used by middle school students in the state of Massachusetts, so people are building curricula with it; it's been taught, and it's a case study at the Harvard Business School; I think colleagues here are teaching it; high school students are using it. It's also been used in a lot of state legislatures. So there's an AI Bill of Rights bill in Connecticut that's already been implemented into law, and they're now trying to figure out what that means for Connecticut; that was signed into law by Ned Lamont last summer. More recently, sponsored by a Republican state legislator in Oklahoma, there's a bill called Oklahoma's AI Bill of Rights, and it has almost all of the five principles that we have, and a few more. And I think we were able, in that year-long process of talking to the public and distilling from best practices and experts, to arrive at some fairly common-sense things, the things that you said: algorithmic systems, AI systems, should be safe and effective. That is not, I think, a polarizing or controversial opinion. People shouldn't be discriminated against; none of us want to be discriminated against, and to have it happen in a way where you can't ask somebody about it, where you can't say: what happened here, who made this decision, what made this decision, how was it made, and who do I talk to if I don't agree? If it's about something substantial in your life, maybe not a Netflix recommendation, if you don't like
the movie that Netflix is serving to you, but for something consequential, a job, health care, these things matter.

So you talked about policy innovation, and your approach as being really policy innovation too, and it ties to a set of literatures around consensus conferences and other kinds of deliberative democratic approaches as experiments in democracy that are useful in all sorts of places. But I'm wondering, in that vein: often we don't think about technology policy as being the place for experimentation in deliberative democracy, because we tend to think that's a space where experts should be telling us what to do, and by experts we tend to mean technologists. So I'm wondering if you could spell out, especially given the context of the other talks at this symposium, what is it that those public comments and that public engagement, what knowledge did that provide to the decision-making process? What were the perspectives that were important, that perhaps were eye-opening, that shaped, maybe you can see a through line into the final product?

So I think if we had taken only a conversation with technologists, you would have had a conversation about AI policy that was: how many parameters is the model, right? Very technical questions around models, which are very important; you cannot do effective technology policy without understanding fundamental things about the technology. But that's not all it is. It's also about, particularly if you're thinking about generative AI, what are the myriad use cases? And there are different policy stakes and questions and decisions that might be made based on where it's going to be used: in a setting like healthcare, where we have privacy regulations like HIPAA, or in an educational setting, in a K-12 setting, where we also have laws and norms and regulations about children's privacy and how
children should be treated in the classroom. So all of that context matters as much as, or maybe more than, the technical things about the AI use case that we're talking about. I think that part was really important. But it also goes back to the mystification. We don't say to the public, if they want to have a role in housing policy, if they want to have a role in health policy: well, you don't have an MD, you can't weigh in here about how your doctor treats you or the outcome of a clinical trial; you're not an architect, you can't weigh in on housing policy. But we've allowed, in the space of technology and science, for there to be a kind of mystification around the fact that these are social phenomena and that they touch people's lives. Certainly most of us, including many of us in this room, do not have PhDs in computer science, but we very much are entitled to have an opinion about how powerful algorithmic systems and tools are impacting our lives.

And how did you think about the fact that the field of AI is changing, and our knowledge about the field of AI is changing? I think about, for example, the emerging, or sort of evolving, discussions about explainability. There's this idea that we should make explainable AI, except that it turns out that that doesn't actually solve the problem; in some ways it increases our over-reliance on faulty AI. Or, one of my graduate students works on the fact that explainable AI doesn't really make sense in the Indian context, where the relationship between the user and the technology isn't the same as in the US, where we're accustomed to being informed consumers. Just this morning they were talking, you may have heard, about the nutrition-style labeling for broadband; that's also a uniquely American phenomenon, a
way of thinking about it. And so I guess there are two questions in there, but it's really about how you think about the fact that our knowledge about this is evolving, and that the different contexts in which these technologies are being implemented are going to change what we know about them. How did you think about that when you were developing the AI Bill of Rights, but also how have you thought about it as the discussion has evolved?

Yeah, it was very much top of mind in the development of the AI Bill of Rights, because what the AI Bill of Rights doesn't do, which is a bit of an innovation, at least in Washington, for tech policy, is that it's not focused on the object. It is not focused on a particular AI system or a particular algorithm or a particular tool or a particular model. It is actually focused on the use cases and the outcomes of those: where are they used, what are the norms, standards, you know, technical standards, policies, guidelines, laws that pertain to the domain of use, and then what are those that pertain to harms or things that might happen downstream through the use of those tools. That's a very different way of thinking about technology policy from how we might have thought about governing television or spectrum, things that are about the object themselves. So it was, I think, introducing a shift in tech policy that was more about what are the outcomes that we want, or the outcomes we want to prevent, discrimination and harm, for example; we want tools that are safe, that enable people to do their work, that enable or enhance economic opportunity for people. And then: what do the tools need to do, or what do we need to know about them? Sometimes explainability, sometimes risk assessment, sometimes auditing, all of which matters in different
contexts, and a lot of which needs to be built systematically. So that was the kind of shift that we were hoping to occasion, and I think we succeeded in doing so to some degree in the broader conversation. And I think explainability, I mean, it's a great question; there are also, particularly for foundation models, the conversations about alignment: is the model aligned with human values? All of these are often highly technical questions that are very abstract. Actually, you probably can't explain the system; the system may be aligned with what the designer or the developer intended, but that doesn't mean that it's a system that's aligned with being a safe system, or with enabling people to use the tool in the way they want. So I come to AI from the space of working in human genetics; both my monographs are in some regard about human genetics. And the human genetics case is the early big-data case: we've got all these genomic data, what are we going to do with all this stuff? So you see a lot of innovation around data, and, well, first computing: the Human Genome Project starts as a project of Health and Human Services and the Department of Energy, because the Department of Energy had these vast supercomputers even in 1990. By the time you get to the contemporary moment, it's: how are we finding correlations in these vast datasets, and early use cases of AI. And even in those there were all sorts of questions, particularly in my work, about thinking about ancestry and what these tests say about the social construction of kinship and of ethnicity and race. And so the question would be, with these algorithmic systems that were analyzing DNA: they do work, right? You ask an algorithm, or you ask a tool or an index, to answer a question, and they answer it, but
that doesn't change anything about the cluster of social and political issues that exist around it, or that might be raised up as a consequence of the use of these tools. And so alignment, narrowly, or explainability might say the system works or doesn't work, but that's really only the beginning. So I think the other really important piece for tech policy right now is moving to a space that's not only about the object, the object matters, the parameters matter, those sorts of things, but also moving to a way of doing policy that's more agile and iterative, so that you might have to come back to the policy. So I've been encouraged: NIST, which is the National Institute of Standards and Technology at the Department of Commerce, has released an AI Risk Management Framework, and it's a version 1.0. Version 1.0 means that there might be a 1.2 or a 2.0, that there's an ability to perhaps go back and rethink the policy. And I think we're going to have to move out of a space of, particularly because it's hard to get legislation, this Congress that's currently in office might be the Congress that passes the least legislation in history, ever, so you can't wait every 10 or 15 years in tech policy to pass new laws and hope to get up to date. So I think a sense of versioning, or a sense of iteration around policy, that you're going to have to go back in six months or a year and revisit: is it still right-sized for the phenomenon that we're dealing with? That is going to have to be a different way. Instead of thinking of policy as these kinds of edifices and cornerstones that you build and build on, we might have to be a lot more nimble.

So I want to get back to that in a second, because I think that's also a very innovative way of thinking about policy, and we're certainly, in the US, not structured to do that, really. But I
want to return to something that you were saying before, about the way that your work on DNA and genomics has influenced the way you think about AI. I think that's such an important point that's often lost. We often see a new technology and say, oh well, we can't know anything about this technology, and therefore that has policy consequences: we need new policy, what are we going to do, it's all unintended consequences. So I'm wondering, I mean, it's quite difficult to make that case, right, that in fact your knowledge around genomics is crucial to understanding AI and gives us some traction. I'm wondering, have you seen that kind of pushback? Oh well, you don't actually know anything about AI, you have a specialization in genetics; or, we don't know enough about actual AI in order to make policy. How do you respond to those kinds of challenges? I mean, you did one thing, which was to say it's not about the object, it's about the consequences. But I'm wondering, how do you manage that in real time?

Oh, what a great question. You know, as a Black woman, as a woman of color working in science and technology policy, you're always managing that in real time, and that's not about algorithms. You're managing that about botany, right? People are like, you couldn't possibly know. So I think that's just a kind of structural truism of the kind of intellectual space that we've chosen to work in. But I also think that the real gift of coming to big data, what we used to call big data and now call AI, through genetics is that I'm always thinking about data as people. Because I'm always thinking about the people that the data comes from. I'm thinking about the fact that the data might be an individual person's sequenced genome, but that individual person and that genome sit in a
broader kind of genetic community, even people they don't know. And so you're implicating, in a kind of social-network-analysis way, all of the people who have some relationship to you with regard to data. So the genomic space, I think, allows you to think about the system; it allows you to think about what might be the stakes for individuals in the way that we analyze data, use data, share data, don't share data. So I have found it a tremendous source of insight, a special insight that I think sometimes, in policy spaces, allows me to get a little quicker to the stakes of particular data privacy issues or policy issues than others, because it's always people to me. And it's also because of studying human genetics, right? Not fruit flies, not rice. There's a very different way, I think, in which you have to think about that. And, you know, at a place like the Office of Science and Technology Policy, it's mostly people who know science in some regard, so there's probably less of that kind of skepticism, because either you can have the conversation or you can't; either you fundamentally know what you're talking about or you don't. And in a community of fairly expert science and technology folks, you know what you're talking about or you don't. The skepticism was more likely to come in non-expert communities, or less expert communities: what could you possibly know about big data?

What indeed. You mentioned that this is not a particularly busy Congress, I mean, I guess they're busy, but not with policy, and I'm wondering, where do you see hope in terms of policymaking? Is it in something like the EU AI Act? Does that give you some hope? What about bottom-up approaches? There are different states and different
communities that are trying to experiment with public policy in this space. What gives you some hope?

I mean, partly what gives me hope is the great question you began with, which is that we've been talking about AI a lot in public. I think you used the words, we toiled in obscurity for many years. So I have a lot of hope by virtue of the fact that there's a public policy conversation that we could not even have imagined having two years ago, and it's a really important conversation, and I'm glad that members of the public, the press, lots of stakeholders are engaged in this conversation and understand that it's important, even if they maybe don't think that they fully understand it. So I think that's incredibly important, and that actually gives me optimism. It's no longer a few of us in the policy space; it's really the broader public saying: do we want to do this, and how far do we want to go with these tools? I was talking earlier with some of the STPP students about some of the polling that we have in the United States around attitudes about AI, and they're not great; there's a lot of skepticism. And I think it presents a challenge for companies and for researchers around adoption and whether or not people are going to use these tools. And that says to me that there's a different kind of accountability now. You want me to pay you $20 a month to use this bot? You want me to not work with my radiologist and instead allow only this algorithm to scan my images? Should I trust that? How should I trust that? So I think that the fact that people feel that they can have questions and be skeptical is not where you want to end up, but I think it's a really good place to be at the beginning of a new, powerful technology. So that skepticism actually gives me hope that we can get to a
good place, and that people will be asking, hopefully, hard questions of legislators and these sorts of things. The policy at the state level, and even at the municipal level: places like Cambridge, Massachusetts were some of the earliest to have initial legislation around facial recognition technology. New York City, where I'll be tomorrow at a conference that Attorney General Letitia James is putting on about AI, has done some interesting legislation around AI. So that's hopeful, because there's a lot of innovation. Illinois actually has had a biometric privacy law for a very long time now. So there are lots of pockets of policy innovation happening in the states. And you see now, last weekend, in the dark of night, we got the announcement of a draft bill for privacy, the American Privacy Rights Act. It's draft legislation, and who knows what's going to happen with that bill, but that makes me the most hopeful that we will get the kind of foundational data privacy that we need, because it's one of the cornerstones. When people say, how are we going to govern AI, that is a pretty major piece of the answer to that question. But there are a lot of states that have privacy laws, and so how are you going to navigate that? Are we going to take the innovations from Illinois and from California into that, and how is that going to work out? So I think some of the states help us to make better federal policy, to the extent that we get there. And then I will say, the kind of stalemate in Congress is, I think, rightly in some ways interpreted sometimes as inviting executive overreach, but it also means that the executive has to act. So when you have the release of ChatGPT in November of 2022, and some people are saying the robots are going to kill us and it's out of
control, and what are we going to do, and there is not really a hope of getting legislation, I think the Biden-Harris administration had to act. And what that is, then, is the president's 111-page executive order on AI, I think the longest in American history, a whole-of-government approach to AI governance. And I think that's a good thing. You obviously want legislation that sticks; executive orders have the force of law for the time that that president is around, or until they're rescinded by another president, but they can also do a lot of work in the interagency, as this executive order is doing, to begin to govern things. So I'm more optimistic than not. And we've already, frankly, been bank-shotting off of a lot of the EU legislation. This is the area you know better than I, but this is what Anu Bradford calls the Brussels effect: they've got enough market power and other kinds of controls that what happens, if a company, I think Bruce Nyer talks about this, maybe he was working at Microsoft or IBM at the time, I can't remember, but you're sitting in a kind of design lab and saying, well, the EU is saying that we have to do it this way and provide some transparency or some privacy protection. Do we want to build a new system just for the EU? Is it even worth it? Or do you want to just have one system that has a bit more privacy protection, and, oh well, I guess we have to use it in the US as well. So there have been those kinds of things even with the GDPR, the General Data Protection Regulation, which is about five years old. When ChatGPT came out, it allowed countries like Italy to say: you're not going to put that in our market right now; let us take a look at it. It puts a little friction, a little speed bump, on things. So I think it's a very dynamic ecosystem, and I think for scholars it's
really interesting, but for policymakers, because in the United States you can't count on legislation, I mean, you do hope, we hope and pray indeed for legislation, but the prospects are not always great, you have to come up with policy innovation, other ways of thinking about how to advance good uses of these tools that we've created.

And you identified a silver lining, right, which is the ability to then iterate.

Yes. That congestion at the Capitol creates all of this other, it's not just experimentation, it's also iteration.

Yeah. So, back to the toiling in obscurity. Yes, in the Ford School and in policy circles more generally there is more conversation about tech and interest in tech, but this is a school that is storied in its attention to social policy. You have a lot of students who are really dedicated to questions around social policy and poverty policy, and deeply dedicated to addressing social inequality and injustice. And I'm wondering, to them, not necessarily to those who already see why it's so important to engage with questions of science and technology, but to those students: what should they be thinking about when it comes to AI? How should they be preparing themselves as they prepare to go out into the world to work? Many of our students, for example, go and work at the GAO, or they work in bureaucracies, or even sometimes in think tanks, or as consultants, or at places like the Urban Institute. Increasingly they're going to have to think about these sorts of things, and I'm wondering if you have guidance for them.

Sure. The first thing I would say, and this goes back to your first question again: it's not magic. And I think corollary to that is that no one working in the policy space can afford not to reckon with science and technology policy. There's not a
single policy issue in this moment family poverty I mean you know like because of you know the sort of sense that technology can fix everything right which is wrong you know we already see technology for good and for not in spaces dealing with social benefits dealing with you know family law there's recidivism algorithms that you know do so-called prediction like you know the algorithmic pieces are already over all kind of social bureaucracies and so you're already kind of dealing with them and living with them every day and so I would say you know don't allow yourself to be scared out of the full policy space so if you're working on poverty or you know on unhoused populations or on issues with the disability community like for you to do good policy in those spaces you have got to get a piece of this not you've got to just you know understand it enough you don't have to be an expert but you need to understand where it's really cross-cutting with these issues because it's a lot of decision vectors are being made particularly using forms of technology in these spaces so I would say all you know I'd say to folks working and you know not S&T policy spaces certainly you all aspire to be good and effective policymakers and I think that aspiration I would hope would call you to think just a little bit more about the science and technology space and be and feel entitled to you know ask questions and probe on how science and technology is being used in those places so I'm going to turn it over to you guys in a second for audience questions so for those of you who are you know sort of thinking about questions be sure to use those QR codes and and send them in but I want to take this opportunity to ask what kind of advice you have for women of color generally black women perhaps in particular who are who are interested in working in this space you know generally science and technology policy but more specifically perhaps AI and critical work and AI you know how to how do you 
deal with the you talked a little bit about yeah you know the sort of assumptions that you can't possibly know but I but I'm wondering what kinds of advice you might have for those so I first want to give you an image and I if I had had a slide this maybe would have been one of my slides from last summer it was the cover of Rolling Stone magazine it was a piece by I think her name is Lorena O'Neill was that the author and it was I think five or six women of color right it was a story about AI policy right it was a few nobles it was my colleague Sarita who teaches at London schools named the last surname I'm forgetting yes yes it was Ruman Chattery Joie Bollimini Timnick Ebru I mean so to the extent that we have powerful analytics about AI and power about about the limits of explainability about the limits of alignment about the way that these tools even when they work well discriminate can be biased can actually have very harmful outcomes that is the work of black women and brown women and so I would say you know they have really been the like forward-leaning visionaries in this space and so I would say to women of color who want to work in this space like this is your legacy this is your baton to take up and move forward with it's it's actually in a quite an extraordinary moment and for you know someone who's slightly older than the women in that photograph that I hope you'll look up and see I mean I think it's incredibly inspiring you know that there are now not enough and also not without cost I mean you probably know the story of Timnick Ebru who was was effectively run out of Google for cautioning about the crisis that we now face with generative AI you know like for being the one who was brave enough to say you know these tools aren't don't work they're not great they're dangerous we should look at them we should take our time you know and and now you know she was she was a prophet so I would say you know also the other students don't be scared out of the 
space you know this is your legacy your intellectual legacy your doer your maker your thinker legacy thank you all right I'll turn it over to you guys thank you so much for speaking with us today so someone sent in a question giving your extensive work on the social dynamics of technology and biomedicine how do you approach the challenge of making complex scientific issues accessible and actionable both for policymakers and the general public kind of speaking to what you spoke to about like public engagement and how important that is well I think the other part of that's important I think I think of that I brought to the policy space is that I'm a teacher and so what we do as teachers is teach complicated things to people for the very first time you know you have to and so I very much believe in the ability for complicated things to be translated in ways that make sense to people or to be sequenced in ways or explained in ways and I think that's a fundamentally different kind of ethical orientation to policymaking then I'm going to give you this like schematic diagram cybernetic thing and you just need to like understand it and I'm going to tell you what the policy is right I feel both ethically and vocationally and I think a lot of the people I brought to OSTP to work on my team people like Cyril Friedler who teaches a computer scientist who teaches at a college right which is a different kind of orientation to can teaching computer science and has really dedicated her career to expanding you know women's participation and computer science for example so I think that I think being a sort of teacher orientation helps quite a lot with that and I think the other ethical orientation is just that we owe it to people to be able to as policy makers to be able to explain things that matter to their lives or about decisions that powerful people are making in Washington that impact their lives in a way that's clear. 
So you touched on this in the lunch session, but I was wondering if you could talk more about how to ensure that policy innovation does not hamper technological innovation — how do we reframe that conversation and communicate that it's not a trade-off?

Yeah — was that the question you asked at lunch? Someone else asked it; it was a great question at lunch. Well, the first thing I will say is that I just call that out as a red herring. There's no reason that we can't have innovation that's responsible and equitable, and when we are presented with a choice architecture that says it's a zero-sum game — that you get one or the other — that's actually just wrong. I shared at lunch something a colleague often says to me when we're presented with this sort of red herring: you can have innovation, or things can be more equitable but they won't be good, or things can be more innovative but they're going to be a little risky. That is just not the calculation we have to make. The example my friend Lawrence gave me, which I use often, is about Steph Curry and the three-point line. You can think about the three-point line as a guardrail — one that enables all sorts of innovation. It's incredible. It's not getting in his way; it's really enabling the best of what we can do. So I just reject outright that kind of polar, binary framing.

So, as we discussed a little with risk assessment and how that looks in the US compared to other countries: someone was inquiring about the difference between risks and harms, because the terms are used interchangeably, and they mentioned the EU regulatory model you were both discussing. Do risks and harms mean the same thing? Is there any value in distinguishing between the two in a way that would make a real difference in how we regulate — any difference, whether in wording or in the actionable outcomes of those two things?

There are a lot of ways to answer that — it's an interesting question. To people who are experiencing risk or harm, I think it's a somewhat pedantic distinction. Certainly in US tort law, demonstrating harms is important for establishing liability and these sorts of things, so it really depends on the way in which you're using the terms. Putting aside the EU piece, which we can come back to, I would say that we need to be able, through AI governance, to mitigate both risks and harms. Some of the risk conversation you can think of as being more upstream: what are the foreseeable risks that we might be able to anticipate and try to do something about? And harms might be downstream: a bad product is released, or unintended consequences happen with a product. That's one way you might think temporally about the difference between them. The EU AI Act — and you might want to weigh in here, because you're more of an EU expert than I — is talked about as a risk-based framework for AI governance, because it's quite interested in particular use cases and the risks of particular uses. Maybe there are use cases for AI that don't carry much risk at all, where we should take a light touch to no touch, versus those that might be more risky — so you can think about a spectrum of risk. Although the question of risky to whom, the decision-making around what's risky or not risky, is hard unless you're talking about the far extremes: being served the wrong movie by a Netflix recommender algorithm on one end, or some sort of major national security risk on the other. There are lots of things in between that can be quite subjective and challenging for governance.

So, bringing us back to the university context, or schools in general: how do you think AI should be regulated in the classroom? Do you think it should be up to the professor or teacher to decide, or should there be some sort of standard campus- or school-wide policy?

Ooh. I don't think you can say you're off the record — no, I'm not off the record. I love this question because it's such a live question. The first thing I will say — and I've already said this publicly many times, including on podcasts — is that I so wish OpenAI had taken another month before releasing ChatGPT and spent that month working with teachers, and that ChatGPT had come out with use guides and discussion guides for teachers, so we didn't have to begin the release of a powerful new technology in a panic. All of the headlines about school districts banning it and all the students are going to cheat — it just felt like such unnecessary social churn, and there probably could have been a bit more work done around that. They would probably say, that's not our job — we make our products, we ship them — but in my dream world I would have loved the product to have come out with some advice about precisely the set of questions you've just raised.

So I don't think we know yet. I think there's so much interesting policy to be made at the university level and even in K-12. One thing we do know is that students are going to come up with interesting use cases for large language models and chatbots that teachers can't imagine, in part because this generation is not just digital native — it's the third generation of digital natives. There is just an acuity with these tools that teachers are not going to have, so I think we need to create a space for that to happen. I talk often about toddlers who don't yet have speech but can zip through an iPad and navigate videos — there's just a whole other kind of relationship with technology right now that we need to make space for. At the same time, the concerns we have already had in ed tech, particularly in the K-12 space, around student privacy remain significant — especially if institutions are going to use the free versions of some of these tools, in which the data that's put into them becomes part of the training data. We know from polling and survey data that young people have different conceptions of privacy, so they might worry less about putting in their address or personal information about their parents. That's not to say it's going to come out the other end of the training data and reveal something, but we don't know, and one wants to be extra careful with young people. So we don't know enough about that. The Department of Education, I know, is doing some work on revising some of the principles it already has around ed tech to anticipate what might happen with generative AI.

It's an interesting moment for universities and for organizations of education and higher education, because AI is a lot of different things at once. It's infrastructure for the organization — hardware, software, that kind of stuff. It's a topic of inquiry for researchers who work on policy and other things. It's being used as an engine for discovery in scientific labs. So it has all of these roles in higher education simultaneously. The last thing I would say is that we should learn from the past and from our mistakes with tech solutionism in education. I was talking to your colleague — our colleague — earlier about ICT: ICT is going to fix educational inequality; we're going to put a computer in every room, a Chromebook for every child, and we're no longer going to have educational inequality. In much the same way that some of the conversations about AI governance are really a continuation of, and a kind of frustration with, our inability to govern social media, I hope that in the education space we think: gosh, we still have educational inequality, and we did put a computer in the hand of every child in that classroom — so what is the lesson from that for generative AI? As I said at the very beginning, the technology alone is not going to do it. What do we need to put in place — study guides, privacy regulations? What is the suite of systems we need to build to achieve what our highest aspirations might be? Some of us may not have high aspirations for these technologies, but to the extent that you do, it's not just going to happen. It has to be built, and people have to commit time and energy and resources to it. That's what universities should be doing: what is the strategy — what are the goods, what are the three goods we think are going to come from this, and what are we willing to do to make that happen? Because buying an enterprise license to GPT-4.5 is not going to do it.

So, based on your work on Afrofuturism, and related to science fiction, the arts, and cultural norms more broadly: what role does culture play in shaping not only AI but technology policy more broadly?

Oh, so many good questions — this is a great one. Our whole imagination about AI comes out of the imaginary; it comes out of science fiction. Is AI going to be more like The Terminator, or is it going to be more like Her? We are so imprinted with science fiction in how we think about new technologies, in any new cycle of technology. It's human culture, but science fiction has this huge imprint on how we think about what technology might do. It's not for nothing that, certainly over the last year, the conversations about AI have referenced 2001: A Space Odyssey — we go back to these reference points in science fiction, in cinema, and in the books themselves. So that has real shape and force. Obviously we're also at a kind of crisis tension around the use of AI tools and systems in creative works: how they're able to make creative works, where that data comes from, whose labor goes into the ability of Sora to make a film or Midjourney to create a six-fingered hand. There's a moment of tension and contestation around the arts that we're sitting in as well, which will perhaps be as transformative as — those of us around my age will remember Napster and music. There might be new models, new compensation models. There definitely needs to be, if not explainability, then accountability and transparency about how these tools and systems are built and on whose labor they're built, and a real conversation about what's fair and not fair. So arts and culture are across the spectrum of what's happening with the AI turn.

While we need AI-specific regulations and privacy laws, we also have many existing civil rights protections and powerful agencies in place. What myths should be dismantled around this, and what kind of relationship exists between these two frameworks — the civil rights regulations we already have, and new AI regulations?

Civil rights regulations are AI regulations — there are not two sets of things. That was part of the theory of the case of the AI Bill of Rights. FTC Chair Lina Khan has been wonderfully frank and articulate about this: if a trade practice is discriminatory, it's discriminatory whether or not AI is engaged in it. I've been very encouraged and proud to see the Biden-Harris administration taking an approach that says housing discrimination is discrimination whether or not it uses AI; if you're violating the ADA, the Americans with Disabilities Act, using AI, you're still violating the act. And this goes to the earlier point I was trying to make about how important it is to demystify and disenchant these technologies, because we sometimes get the impression that a new technology cycle requires a new social compact — that the laws and rules and norms and values we've had can't possibly apply to this really shiny, amazing new chatbot. But of course they can. It's been really important in policy circles to anchor in those fundamental rights and responsibilities with regard to AI, and not to think that we have to wait to create whole new systems and whole new agencies — to the earlier question — to deal with risks and harms that we're already facing and that, to some extent, imperfectly and not completely, we have levers and tools to deal with.

So, you've written about the concept of social repair in the context of genetic testing among African Americans. How can the principles of social repair inform broader policy strategies in health and technology?

I don't know — that's an interesting question. Social repair — that's referencing my book The Social Life of DNA, which looks at different use cases for direct-to-consumer genetic ancestry testing. I don't know. Maybe a different way to put it might be: could you imagine an AI for reparations? Sure — is there a way to think about AI in that way, in the way that ancestral genetic testing gets used in your case? Well, somebody's going to build it, certainly. Whether or not it works is a whole other question. Right, exactly — it will be in the GPT store soon if it's not already there. I don't know. I think there are still some fundamental inequality questions in the tech stack — supply chain issues with regard to AI development — that fundamentally vex AI tools and systems as tools for repair or for social justice: beginning with the critical mineral mining, often in the Global South, that has to take place; the use of underpaid labor to do fine-tuning of the AI models, for example; the environmental justice dimensions — the extreme amounts of energy and of water it takes to cool data centers to train models. So there are a lot of trade-offs that might not make AI a great vehicle for repair. But maybe — I hope — somebody will prove me wrong.

I think you're getting at something I'm often in conversations about: the desire to see technology as neutral, which then enables us to imagine these kinds of technologies. What you just did was bring out the ways in which certain technologies are always going to have certain kinds of values embedded in them — beyond what you just said, of course — and we don't usually surface those values, because we tend to think of technology as neutral. Yeah, for sure.

And then, can you speak to the general reception you've seen of the AI Bill of Rights and other AI-related policies and regulations by technology companies and other actors in this space?

Sure. It's an interesting moment in the market, because companies both don't want to be regulated and also need enough structure in a market so they can do their work. Last summer, obviously, there were hearings on Capitol Hill with a lot of tech executives, many of them saying, please regulate us, and people would say they don't really mean it. I think the answer is more that they both do mean it and don't mean it. Consider people who work in finance: finance is one of the most highly regulated of our societal systems, but that regulation keeps it relatively fair for the people working in the system, with forms of sometimes radical transparency around companies' financial information that you don't have in other kinds of markets. So in the same way that the red herring of regulation-versus-innovation gets deployed, there is something fundamentally enabling that happens when you regulate markets. It gives people — folks in DC often say — the same rules of the road: you know the level at which you're competing, who you're competing with, and what you're competing about. So I think — yeah, we'll just leave it at that.

I think we're at time. Oh — yeah, okay, sorry. That's okay, I wasn't sure. All right, well, wonderful. Thank you so much, Alondra. This has been a fantastic conversation, and I'm so glad that you could join us at the Dean's Symposium. Thank you so much. And I think we have a reception outside, right? So please join us outside for a little reception.