Welcome, all, to Higher Education and Generative AI: Emerging Lessons from the Field. My name is Andreen Soley and I'm the Director of the Public Interest Technology Program here at New America. We're curious to know what brings you here today, so please take a moment to respond to the poll on your screen. You are welcome to submit questions throughout today's session; we'll pass them on to our panel during the Q&A section at the end.

I hardly need to say it, but I will: generative AI is all the rage. It's in the headlines, it's in our conversations, our classrooms, our workplaces, and in our worries and possibly our dreams about the future. And for those of us in nonprofits, it's on the minds of our donors. We have a fantastic panel today to help us find our footing in this moment. Our moderator, Renee Cummings, teaches at the University of Virginia School of Data Science and has counseled many organizations, from local school districts to the World Economic Forum, about how to understand and respond to artificial intelligence. Our first panelist is Meredith Broussard, a leading AI researcher and data journalist who teaches at NYU. Her newest book, More Than a Glitch, takes a deep dive into the myths and assumptions that lead to tech-enabled bias. Our second panelist, Todd Richmond, leads the Tech and Narrative Lab at Pardee RAND. He invites us to imagine how we can transform our teaching and learning methodologies to suit the interdisciplinary nature of how we work, learn, and live our lives. And last but not least, Vanessa Parli is Research Director for Stanford's Institute for Human-Centered AI, where she leads an interdisciplinary effort to map emerging trends in AI development and foster holistic understanding of this emerging technology.

If there's one thing I hope you take away from today's webinar, it's this: the future of AI is not settled. We each have a role to play in shaping this technology and how it is used. Many of us feel scared, confused, and overwhelmed about the big changes, current and possibly future, that generative AI promises to usher in. But through community and collaboration, we can face these challenges and develop frameworks, guardrails, and hopefully resources to respond in ways that promote dignity, justice, and autonomy for all of us.

Public interest technology, or PIT, has a lot to offer in this moment. PIT is a diverse set of practices to design, deploy, and govern technology in ways that benefit the public. And again, by public, I mean us as a community. PIT takes many forms. To give a few examples from our network of 63 universities within the Public Interest Technology University Network that is housed at New America: PIT can look like training programs designed for students who come from communities typically shut out of the tech field and who are looking to make an impact directly where they live. It can also look like partnerships between universities and the local community to develop technologies that improve access to food, housing, transportation, or health care. It can also look like cybersecurity clinics where students educate and support local businesses and nonprofits. And it can even look like immersive art projects that help us imagine technological futures while connecting us to our shared humanity. There are many more examples of what PIT can look like, and I encourage you to learn more through our website for PIT-UN, which is pitcases.org.
Our May PIT UNiverse newsletter will be curating resources and thought leadership on AI from across our network, so if you'd like to learn more, be sure to subscribe to that as well. With that, I'll hand it over to our moderator, Renee Cummings.

Thank you so much, Andreen, and welcome to all of these amazing panelists and to everyone who's participating. So: to ban or not to ban, to fear or not to fear. These are some of the big questions swirling around the onslaught of generative AI on our sensibilities and our society. How do we harness the rewards? How do we mitigate the risks? How do we uphold our rights? And what are the responsibilities for individuals who are developing and deploying this technology? Some very, very big questions, and I think we have an amazing panel to get into it. I'm going to start with Meredith. So Meredith, you have schooled us on Artificial Unintelligence; now you're telling us it's more than a glitch. You are an expert in the socio-historical impact of technology on society, and you're also an expert on how bias and discrimination are baked into these technologies. How should we interact with generative AI? How should society respond? How should individuals respond? How should the education system respond?

Well, Renee, that's a great question. Thank you. I would start with the idea that in the classroom, we need to clarify for our students what AI is and isn't. That's where all of our conversations need to start. So AI is math. It's very complicated, beautiful math, but it is just that; it is only math. AI is not Hollywood. AI is not fantasies of artificial general intelligence. What we have that's real is mathematical systems: we take in data, the computer makes a model of the mathematical patterns in the data, and then you can use that model to make new decisions. That's it. It's complicated, but it's also not complicated, right? So we need to start with a shared understanding of what's real and what's imaginary.

I think we also need to think about the current moment as part of a larger history of technology. This is just one chapter. We're in a hype cycle right now around AI, especially around generative AI, and people are saying things like: oh, it's gonna change everything, it's going to make new jobs, it's going to eliminate jobs, it's going to change education forever. It's going to change a couple of things, yes, but this is not the invention of fire. It's just another AI program, right? So I think that when you approach it that way, it allows you to chill out a little bit, to feel less terrified or intimidated by the current moment in generative AI.

I think it's also really important to understand that the hype cycle does not have room for ambiguity, and ambiguity, bias, and social problems are absolutely everywhere when it comes to AI. Social problems like structural discrimination, racism, sexism, and ableism are reflected in the data that we're using to train AI systems. The mathematical patterns in the data show discrimination; they show bias. So when you make a system that just replicates decisions that have been made in the past, you're making a system that reproduces inequality, and it crystallizes that inequality in the code, making it very hard to see and hard to eliminate. Which is why we need more investment in the work of algorithmic accountability reporters, because that's the kind of journalism that is holding algorithms and their makers accountable.
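For readers who want to see that dynamic concretely, here is a minimal sketch of a model trained on biased historical decisions, which then reproduces the bias. Everything here, the data, the penalty, and the feature names, is invented for illustration; it is not any real system's training pipeline.

```python
# Minimal sketch: a model trained on biased past decisions reproduces
# the bias. All data here is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

score = rng.normal(size=n)            # a job-relevant qualification score
group = rng.integers(0, 2, size=n)    # a demographic group label, 0 or 1

# Hypothetical historical decisions: past decision-makers systematically
# penalized group 1, independent of the qualification score.
hired = (score - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# The model faithfully learns the penalty: for the same score,
# group 1 gets a lower predicted probability of being hired.
same_score = [[0.5, 0], [0.5, 1]]
print(model.predict_proba(same_score)[:, 1])
```

Nothing in the math flags the group penalty as a problem; the pattern is simply there in the data, which is the point about inequality getting crystallized in code.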
In terms of what people should be thinking about for syllabi, there are some really terrific critical technology resources out there. I do happen to have written two books on the subject; I think those make great classroom resources. But the first place that I always send people is to the Center for Critical Race and Digital Studies, to the syllabus there, which has an incredibly thorough list of amazing thinkers who are doing work in critical technology studies.

Thank you so much, Meredith. Todd, the next question is for you. I mean, this is your space; you study emerging technology in learning and teaching. What do you see as the greatest opportunities for generative AI, and what do you see as the greatest threats, when we think about the classroom and across the education system?

Well, first off, thanks for inviting me; it's a pleasure to be here. And if you can pull up the slides, we'll start from there. I'm also heartened to see from the poll results that two-thirds are interested in ethics and philosophy. That's what brought them here, and I'm going to touch on that. So on Tuesday of this week, I gave a talk to our school staff and some of our faculty. For those that don't know Pardee RAND, we're a graduate-only program. Our degree is in public policy analysis, it's highly quantitative, and it's mostly a small PhD program; we have some master's students, and it's housed within the RAND Corporation. So I've recycled some of the slides for this and made some new ones. I couldn't really come up with one single title, but Dr. Strangelove is a classic, speaking of Hollywood, and the BLAND Corporation is mentioned in the film, so it is near and dear to RANDites' hearts. My first title, when they asked me, was basically a paraphrase of the title of Dr. Strangelove. Next slide.

But when you go a little bit deeper into things, generative AI really broaches the question of cheating. And I'm a big fan of a conceptual framework looking at what, how, and why, because I think those are levels of understanding. Next slide. I think we're also going to get into questions of how decisions are made and what is involved in decision-making. Next slide. And finally, I think this is probably the most important part, and this is where the ethics and philosophy come in. I take a little bit different viewpoint: I think this is a sea change, just like the internet was a sea change for humanity, and I think it's time for a fundamental rethink of what constitutes a human endeavor. Next slide.

So if we think that education is supposed to prepare our students for the future, that kind of means we need to understand what the future is going to look like and what the metrics for success are going to look like. So how do we prepare them for that world? The reason I dropped down to one year is that six months ago we were not having these conversations about generative AI, or at least not at the level we are having them now. You can attribute some of that to the hype cycle, but you can also attribute it to the fact that the rate of change is astronomical. Next slide. So I want to very quickly say: let's think small and let's think big. Small is: what are the problems I have to solve right now in the classroom? And big is: what are some of the bigger questions that we are gonna need to grapple with individually, collectively as an academic enterprise, and collectively as humanity? Next slide. So back in January, we did a quick-and-dirty task analysis.
I sat down with three of my students and we said, okay, let's come up with all of the tasks, or at least a good collection of the tasks, that grad students do before they enter our program, during the program, and when they leave. We did this in a two-hour session, and we had a very simple rubric: "yes" means ChatGPT gets you a 95% solution to the task; "maybe" means it doesn't really do it now, but we think those features are gonna be developed shortly; and "no" means it doesn't really work very well at all. Next slide.

And our scorecard was that 97% of the tasks that our graduate students do can have some generative AI aspect. OJT is on-the-job training: all of our grad students are required to work on RAND research projects, they have to do at least 300 days of project work, and on 70% of those tasks generative AI could have a significant impact. An anecdote: on Monday of this week, one of my students said that for another class, they were assigned five readings, very dense academic articles from the '50s and '60s. They read one, found it painful, asked ChatGPT to summarize it, and read the summary. Since they had read the paper, they could tell the summary was good enough. So they did not read the four other papers; they just read the ChatGPT summarizations. Next slide.

So we took this and then tried to think more broadly about the education experience and the classroom experience. I've been working in emerging tech in education for almost three decades. In the early 2000s, we were doing a lot of work around asynchronous, synchronous, and remote education, having back channels and things like that. COVID forced those discussions to be had in more earnest, and I think generative AI is gonna accelerate those conversations. Going back to the what, how, why: think about mechanical, operational, and conceptual, and about what we're teaching and why we're teaching those things. ChatGPT is pretty darn good at mechanical and operational things; it can do calculus. The conceptual is a weakness, although I have another anecdote, maybe for later, about how that's problematic. Next slide.

So our guidelines are: first, put something in the syllabus about ChatGPT, because the students are going to use it. What we are asking for is citations: if they use it to generate prose, they cite what their prompt was. Sometimes we might also ask them to cite what the response from ChatGPT was, but then your citations are gonna be longer than the actual paper. For now, we want them to treat ChatGPT as another source. The second is to understand what the tool actually is: I describe generative AI as a food processor that has access to every vegetable. You ask it to make pico de gallo, and it'll make pico de gallo, but you don't know whose recipe it used, and it doesn't know what pico de gallo tastes like; it only knows how other people have described it. And the third is to really think about maker- and critique-based classes. Next slide. The maker and critique space is where we actually move towards the big questions, and I'll step through those very quickly. Next slide. So this is where we get into the big picture and rethinking what the classroom is like. I think most of us would say that, at least for higher ed, these are three things that are really important to us and that we're trying to teach our students to do. Next slide. So the question is: do our existing models work for that?
And given what ChatGPT can do, here are some of the big questions I think we need to grapple with. Do students need to know higher math? My econ colleague would argue yes, they need to know calculus. But again, we're back to this: is that a mechanical skill in service of a conceptual skill, when there is another way to do it? You can go back to the arguments around calculators in the classroom to get some inkling, but this is a much bigger fish to fry. More broadly: do students need to learn mechanical skills to achieve conceptual skills? My formal training is in music and chemistry, and chemistry fought this battle over organic chemistry for decades. It was a memorization exercise, so mechanical skills; more recently, it has moved towards teaching the subject mechanistically and then conceptually, instead of as just a bunch of mechanical skills. And finally: is writing, and by extension reading, going away? I think that's a question we have to ask, especially given that I have very smart, very skilled grad students who are using ChatGPT to do a lot of their reading. Next slide.

So, are humans gonna be in the loop for decisions going forward? This is a big topic in the military for shoot/no-shoot decisions and autonomous weapons, but we are seeing it with autonomous vehicles and autonomous [insert your thing here]. Next slide. We know that technology changes jobs. We know that certain jobs are lost and other jobs are gained. Right now there's a big discrepancy in what people think those ratios are going to look like for generative AI and other capabilities. Next slide. And this, I think, is the really important philosophical question: what are human endeavors? About a month or two ago I read an article in The Atlantic where the author was saying they had no interest in seeing a movie written by generative AI, that films were a human endeavor and a human experience, and they wanted to keep it that way. That said, we have seen that as technologies come in, if they are compelling and convenient, they will replace what came before them. So this is an opportunity to really rethink deeply the what, how, and why of the things humans do. Next slide. And finally, I think the postscript is: gee, maybe the humanities actually are important after all. So I will throw it back to the group.

Thank you very much for that, Todd. Very insightful, very comprehensive. So now to Vanessa. Vanessa, we know that you are committed to human-centered artificial intelligence, and we also know that you facilitate interdisciplinary thinking in AI. And of course, you're behind, or working with, the team that publishes the annual AI Index. Tell us a little more about the AI Index, the methodology of putting it together, and whether ChatGPT-style tools and generative AI figure into the index this year in any way.

Sure. So I figured I'll start with a little bit of background about HAI, the Stanford Institute for Human-Centered Artificial Intelligence, and then I do have a few slides to highlight some of the visuals in the index. First, HAI was established in 2019 with the goal of fostering interdisciplinary AI research, education, and policy programming that improves the human condition. We believe that interdisciplinary collaboration is essential to ensuring these technologies benefit all of us.
And that interdisciplinary mindset is reflected in our faculty leadership, which comes from medicine, science, engineering, the humanities, and the social sciences, and it is reflected in all of our programming, where interdisciplinary collaboration across multiple schools within Stanford and outside is a requirement. You can go to the next slide.

So the AI Index is an annual report that's housed within HAI, but it's guided by a steering committee of experts in academia, industry, and government. Some are very technical in their backgrounds, some are policy-focused; we have economists, philosophers, et cetera. The index tracks, collates, and visualizes data related to artificial intelligence across multiple areas. We have an education chapter, a research and development chapter, an economy chapter, and many more. I encourage you all to take a look; there's something for everybody. Next slide, please. Before going on, I did want to give a quick shout-out to our data partners, without whom the report would not be possible. They collaborate and contribute data and analysis across the entire report. The report is 300 pages, so I'll go through just a few charts and graphs, but hopefully it piques your interest to read more. Next slide, please.

So this year, generative AI broke into public consciousness. We've mentioned it already in this session: it's in your classrooms, it's at your dinner table, it's at your barbecues. This was the year with the most releases of large generative AI models so far, and they're applied to a variety of tasks: language, image, programming. Some are private, some are open source; some are from the US, some from China, some from Europe. Next slide, please. And while the slide is changing: these AI tools are not new. AI has been around for quite some time. We collected data from Epoch AI, which has created a database of machine learning systems going back to the 1950s. You can see from this chart that the size of those systems has been growing exponentially over time and that development has been shifting from academia, the blue dots, to industry, the purple dots. And for those interested in AI history, Samuel's checkers player, the best-known early machine learning system, is all the way down at the bottom left. Next slide, please.

And the majority of these systems are developed in the US, Canada, the EU, and China. At HAI, we talk about this point a lot: what might this mean for the rest of the world? These systems are developed in certain areas of the globe; values, cultures, and norms are embedded within those technologies; and then they're distributed across a world where not everyone has the same cultures and norms, et cetera. So are we exporting certain values? Do we want to be doing that? How can we build these systems so that norms can be adjusted or modified depending on where you are in the world? That is one of the many reasons we believe we need diverse perspectives participating in all phases of development of these technologies. Next slide, please.

As part of the index, we also synthesized information from the Computing Research Association's annual survey on the state of computer science and AI in post-secondary education. This is just North America, but CS PhD students are gradually becoming more diverse. Next slide. And undergraduate students even more so. However, I would personally say that this is not good enough, and we still have a long way to go. Next slide, please.
And the portion of new AI PhDs who are women has remained at around the 20% mark. Next slide. And women are making up an increasingly greater portion of CS and CE faculty, but again, I personally don't think this is good enough. Next slide, please. And then, back to the comment I made earlier about how most of these systems are coming from industry: we see a similar trend with new CS PhD students. Increasingly they're headed to industry, less so to academia, and even less so to government. Next slide, please. And then lastly, I wanted to briefly touch on the policy chapter. The AI Index surveyed legislation concerning AI passed in 127 countries. Thirty-one countries have passed at least one AI-related bill, and 37 such bills were passed in 2022 alone, nine of which were in the US. Next slide, please. And policymakers around the world, we're hearing it more and more, are developing national AI strategies, and more countries are added to this list every year. All right, that is all I have at the moment. Thank you all.

Thank you so much for that, Vanessa; my screen was dancing there just a little bit. So I'm going to start the discussion. We know the big questions have been around whether we should pause this technology; there is a fraternity that believes we should. Most recently there have been calls for a global body to regulate generative AI, and there are also calls for a more ethical approach to this technology. My question to any of you would be: what should an ethical approach to generative AI look like, and what should governance of this technology look like? Anyone can just jump in.

Well, I can kick us off. I was really struck by a story that ran in the Washington Post yesterday about the training sets, the training data used to create generative AI like ChatGPT or the Google version of ChatGPT. I think one of the things that gets lost in the hype around generative AI is the incredibly toxic nature of the training data used to create these systems. The way these systems are created is the same way all other machine learning systems are created: you take a whole bunch of data, and in this case, it's data scraped from the open web. The open web has a lot of really wonderful stuff and a lot of really toxic stuff, and all of that indiscriminately scraped data gets fed into the computer, the computer makes a model, et cetera. That's how generative AI works. So this analysis by the Washington Post looks at what, specifically, are the sites that make up the Common Crawl, which is the dataset used to feed ChatGPT, and again the Google competitor, and really all the other generative AI systems, because they're all drinking from the same well. There aren't that many places to get massive datasets, and everybody's pretty much using the same stuff. So these generative AI systems are being fed with data scraped from 4chan. They're being fed with data scraped from Stormfront. There's a lot of hate speech in the training data. There's stuff like voter files in there, right? So you really need to be careful about trusting generative AI systems.
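For readers curious what that kind of analysis looks like mechanically, here is a minimal sketch of a domain-level audit of a scraped corpus. The documents and the blocklist are placeholders invented for illustration; this is not the Washington Post's actual methodology, and a real audit would run over millions of URLs.

```python
# Minimal sketch of a domain-level audit of a scraped training corpus.
# The documents and blocklist are invented placeholders for illustration.
from collections import Counter
from urllib.parse import urlparse

documents = [
    {"url": "https://en.wikipedia.org/wiki/Linguistics", "text": "..."},
    {"url": "https://example-forum.net/thread/123", "text": "..."},
    {"url": "https://example-hate-site.org/post/9", "text": "..."},
]

# Hypothetical list of domains known for hate speech or spam.
blocklist = {"example-hate-site.org", "example-forum.net"}

# Count how many documents each domain contributes to the corpus.
domain_counts = Counter(urlparse(doc["url"]).netloc for doc in documents)

flagged = sum(count for domain, count in domain_counts.items()
              if domain in blocklist)
print(f"{flagged} of {len(documents)} documents come from flagged domains")
for domain, count in domain_counts.most_common(10):
    print(domain, count)
```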
Todd, your graduate students who are using ChatGPT to shortcut their readings, I am sure they feel like they're being really creative, but they are also doing themselves a disservice, because there's absolutely no guarantee that what ChatGPT is putting out has any validity or any basis in reality. It's given to hallucinations. So I think that ethical use of generative AI starts with emphasizing that this is just a tool. It has zero foundation in truth. It is given to hallucinations, and even though it looks like it's working, it's not necessarily working, right? So I wouldn't trust it entirely.

So I think "entirely" is the keyword there. I don't think that everything ChatGPT spits out is nonsense, because you can fact-check it. You can't trace back to see where the original sources were, but I think it's a little from column A and a little from column B. And like I said, the graduate student did a sanity check on it and found that it was actually a pretty accurate distillation of the original source material. With regards to where it's scraping from, there's a very good article in The Verge that came out today about AI Drake: a viral new song from Drake dropped, and it turns out it was not done by Drake; it was done by generative algorithms. The piece in The Verge goes on to talk about intellectual property and the whole concept of fair use. Almost all of these generative AI companies are making the same claims that the search engine companies made, which is: well, that stuff on the internet is out there; it's free to use for anything we wanna use it for. And for search engines, it was one thing to index it; it's a completely different thing to use that information to build an algorithm which will now essentially create derivative works of the original work. That's much more problematic. And of course, Getty Images sued Stability AI, the maker of Stable Diffusion, for billions of dollars; we'll see where that goes. You're gonna see the same thing in the music industry. It's bringing the Section 230 argument back to the fore about what regulation is gonna look like.

With regards to the ethics: about a year and a half ago, we started an effort called EIEX, which stands for equitable interfaces, ethical experiences. It's a play on UI/UX, but it's our belief that technology creates policy de facto, and that the people who design the technology are creating policy, especially in a vacuum where government is not doing regulation and creating policy. UI/UX has some elements around accessibility that get at some of the equity and ethical issues, but in our view, that's necessary but not anywhere near sufficient. So there needs to be an entire new field about how you create emerging technology that has equity at its core and has ethics baked into it, instead of, as has been mentioned, having toxicity and disinformation baked into it. We're hoping to grow that. We open-source everything that we make and freely give it away, so we would love to find partners on the journey of figuring out how to do EIEX for emerging tech.

Yeah, I wanna add another aspect of how to do all of this ethically. At HAI, we talk about how ethics needs to start from the very beginning, from the development stage, with the computer science graduates working on these types of things. They need to have an awareness of ethical concerns, and they need to be working with ethicists. At Stanford, there's a program called Embedded Ethics, where the computer science students are taught ethics modules in their CS core courses, in the hopes that that impacts their thinking as they go on and develop these technologies.
And then what we also do, for all of our grant funding, is that the teams need to write an ethics statement before they get anything to start their research, thinking about what the ethical and societal implications could be if this technology were to become ubiquitous. Those statements are reviewed by an interdisciplinary panel of experts, again from medicine, philosophy, computer science, et cetera. A lot of times there is iteration on the research methodology in order to adjust, and sometimes it's even decided that part of the research maybe should not go forward. But yeah, we don't provide any funding for research until that ethics and society review is complete.

Vanessa, thank you for that. Let's talk governance for a little bit. Now, we always talk about these rigorous and robust guardrails that need to be erected to protect and uphold our civil rights, our human rights, our digital rights. What do you think a strong governance structure for generative AI should look like?

One of the things that is important to keep in mind is the context where AI is used. Take facial recognition AI, for example: a low-risk use is using facial recognition to unlock your phone. A high-risk use is something like police using facial recognition on real-time video feeds as part of surveillance, because it's going to misidentify people with darker skin more often, thereby contributing to harassment and over-policing. So the context is key. One of the things that I would really like to see is more algorithmic auditing. Algorithmic auditing is something we talk a lot about in PIT circles. What it is, basically, is opening up black boxes, interrogating algorithms to look at where the biases are, because there are biases. There are huge flaws in AI; all you have to do is look for them and you'll find them. So I would love to see algorithmic auditing integrated into regular ethics reviews, for example. It would be great to have ongoing monitoring of technology as new iterations are rolled out, to make sure not only that the technologies are not biased to begin with, but that bias which has been mathematically remediated is not added back in in future iterations. So that's a kind of technical feature of algorithmic governance that is going to help in implementing high-level policies.
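As one concrete flavor of the auditing Meredith describes, here is a minimal sketch that compares a model's false-positive rates across demographic groups. The labels, predictions, and groups are hypothetical placeholders, and which fairness metric matters depends on the context she emphasizes.

```python
# Minimal sketch of one algorithmic-audit check: comparing a model's
# false-positive rates across groups. All arrays are hypothetical.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0                 # cases that are truly negative
    return (y_pred[negatives] == 1).mean()  # fraction wrongly flagged

# Placeholder data: true outcomes, model predictions, group membership.
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false-positive rate {fpr:.2f}")
# A large gap between groups is a red flag worth interrogating further.
```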
DARPA, maybe 15 years ago, had a program called Explainable AI, because they saw this coming. Their goal was: can you open up the black box and start to trace how the algorithms are working? Because when you have algorithms rewriting themselves, the people who set them into motion don't really know what's going on under the hood. We're also big fans of red teaming, and I was heartened to see that OpenAI is doing red teaming on their algorithms. Red teaming is usually done to try to figure out where your vulnerabilities are, and commercial companies do it for competitive advantage. We started arguing a couple of years ago for narrative red teaming. What prompted it was that the city of LA announced they were going to release all their 311 data to the public as part of a transparency effort. And I was immediately horrified, because 311 data taken out of context allows you to construct very toxic narratives when you combine it with demographic data and census data, and no one was thinking about how that data was going to be weaponized. So the idea of narrative red teaming is that if you're going to release a report or release data, you work through how people are potentially going to weaponize it, and then you prepare messaging in advance, and maybe you do inoculation. We have a grad student working on this with Russian troll posts, looking at how you inoculate against disinformation campaigns.

For the governance piece, the thing that I find most challenging is that most of us are sitting in the US and have a very US-centric viewpoint on this. This is a global phenomenon, and since we end up doing a lot of national security work, we worry about adversaries. The problem is that not all of the countries developing these technologies have the same moral compass as we have, so there's an asymmetric governance problem. If we want to pause it, which I don't think is realistic or will have the desired outcome, the problem is that other people are not going to pause it, and now you've given them a competitive advantage. So it's a very messy, complex, global problem set when you talk about governance for these emerging technologies.

I would say, too: yes to red teaming, yes to algorithmic audits, completely agree. There's also this aspect of developing community norms. These technologies are moving so fast, and our governments don't move so fast. What do we as a community of researchers and computer scientists want for this technology? There's the example of CRISPR, where once those in the development phase of that technology realized its impact, they developed their own community norms, which had nothing to do with the federal government. So we should be thinking about, when we develop these technologies, what is appropriate to release, when is it appropriate to release it, what types of documentation should be released with these models, et cetera.

One thing I just want to tag on, because I totally agree, but I want us to be very careful: when we say community, we need to cast a wide net for our stakeholders. My wife is an artist; she's also faculty at CalArts. I've spoken with her students, and artists, for instance with generative image algorithms, stand to lose a lot in this. So that community of practice needs to be not just the computer scientists and the technical folks; it needs to bring in the arts and the humanities, because they are very real stakeholders in this equation.

Yes. And now, did you want to come in, Vanessa? Because I was going to move to some questions. Sure, okay. So our first question: some of the systemic problems precipitated by the internet were unforeseen. What systemic problems do you anticipate if we all use ChatGPT for intellectual work? Anyone can take it.

All the problems, plus some new ones. Yeah, it turns out plagiarism is nothing new, and this is sort of a supersonic version of plagiarism; humans do what humans do. So we can look to the past to see how humans behave badly. It's just that the technology scales, digital scales, in a way we've never seen before. So the speed at which those problems propagate and the scope of those problems is what's drastically different now.
And so I think, I agree, it's going to be all of the same; it's just going to be faster and on a bigger scale.

See, I don't know if I agree with the idea that it's scaling in a way we've never seen before. We've been doing this for 30 years now. I've seen it; it scales. It's not new anymore. But in terms of plagiarism, you're absolutely right that plagiarism is nothing new, and students cheat; that is absolutely the case. I've seen some interesting work about how you inoculate students against cheating with ChatGPT. One of the interesting ideas is to have iterative assignments: do in-class writing and have assignments that build on the work done previously. That's a little bit more challenging. You can make the students do in-class editing. And then another assignment I've seen a lot is when instructors have the students actually use ChatGPT. They say: all right, we're going to feed it a prompt, the students get the results back, and then they have to critique the output. That's been a really useful exercise for critical thinking around technology.

I don't think we can eliminate cheating entirely. I think one of the other responses is that we can reevaluate what we're trying to do when we ask students to take closed-book exams, for example. Are we requiring them to memorize something for the sake of memorizing it, or can we design assessments and exams that are, say, open-book, or that acknowledge that there is generative AI out there, and have the students... Oh, another assignment I heard of is where students have to do an outline for a term paper, then feed the outline in and have ChatGPT write the rest of the term paper, and then also critique it.

So here's our next audience question: part of the fear and hype comes from the CEO of OpenAI, Sam Altman. How do we, the media (this is obviously someone in the media), distinguish between what is accurate and what is hype?

Talk to somebody who doesn't have an economic dog in the fight. It's fine to talk to people on the corporate side of things, but the folks in the academic sector are usually a little bit more even-handed. So to me, it's basic fact-checking: the CEO of the company says this; find somebody who doesn't have a financial stake in that comment and see what they have to say. There's no perfect way to do it, because the hype almost always has an element of truth to it.

Okay, Vanessa, this question I think is for you. In the AI Index, this person says they were upset to see most AI PhD grads go into industry, more than academia and vastly more than government. How will this affect regulation?

Yeah, I mean, I think the big issue here is that if the knowledge is no longer within academia, who's going to teach these concepts to the next generation? And same with government: if the experts in these technologies are not going into government, how is our government going to know how to appropriately set regulation, how to appropriately think about these types of things? One of the programs we have at HAI, through our policy arm, is a congressional boot camp, where we bring members of Congress and their staffers to Stanford and give them a two-day crash course on AI with a government-regulation lens: what types of things should they be thinking about?
We also have a fellowship program to try to get some of these PhD students interested in government and civil service, to funnel them that way and show them what else is out there besides the perhaps-high salaries that industry can provide.

Thank you. Todd and Meredith, is there a good resource somewhere with sample syllabi that have assessment rubrics, either of you?

Not that I'm aware of; I mean, I would Google it. Yeah, or feel free to email me. I have some boilerplate, for instance from CalArts, on what they put in their syllabi about use and non-use, and we've got some other boilerplate that we've done. The problem is that it's a moving target, because what it's good at doing and what it's not good at doing is changing, and it's very context-specific.

This question is about transparency; anyone can take it. What would need to happen to guarantee transparency with the data sources being fed into AI?

So I won't comment on what needs to be done, I think there are a few different things, but I do wanna point out a resource that the Center for Research on Foundation Models has recently developed, called Ecosystem Graphs, where they try to map each of these different generative AI systems: where is the data coming from? What is the system that it's built upon? So you can see what's going on and perhaps better identify where some of the bias, et cetera, might be.

Another thing you can do is go to arXiv and read the academic papers about how something like ChatGPT is created. If you wanna know what's in GPT-3, you can go and read the academic paper, and it says in there: okay, this is trained on Common Crawl, and for its self-censorship it's using, what is it, RealToxicityPrompts, that's the dataset it's using to find bad words in the data. So the information is not super secret. People like to pretend that it's super secret, but it's not all that secret. Well, the weights are secret. The weights are secret, but most people are not messing around with weights; most people are interested in, okay, what is this being trained on? And the fact that it's being trained on Reddit data is really helpful to know, because you can look at Reddit and you can see: oh, that's a cesspool. It's pretty interesting, but it's also a cesspool. So maybe this generative AI is going to spew some filth that I do not agree with. That's the level of transparency that I think a lot of people would be pretty happy with.
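As a concrete example of that kind of openness: the toxicity dataset Meredith mentions is published openly. Here is a minimal sketch of inspecting it with the Hugging Face datasets library, assuming the dataset ID allenai/real-toxicity-prompts and the field names shown are still current; check ds.features if the schema has changed.

```python
# Sketch: inspecting a publicly released dataset used around LLM training
# and evaluation. Assumes `pip install datasets` and that the dataset ID
# and field names below are still current; check ds.features to confirm.
from datasets import load_dataset

ds = load_dataset("allenai/real-toxicity-prompts", split="train")
print(ds)            # feature schema and row count
example = ds[0]
print(example["prompt"]["text"])      # one prompt's text
print(example["prompt"]["toxicity"])  # its scored toxicity, 0 to 1
```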
Okay, the next audience question: apart from summarizing articles, do students and faculty in your programs use ChatGPT in other ways to engage in learning, for example for running experiments? So Todd, I think that's directed at you.

Yeah, so not only students but also our researchers are using it to write code. It is very good at Python. One of our AI researchers, who is one of the snarkiest and most negative people I know, is using it to write code. It jumps over his bar because the code is good and the code works, and it comes down to: if it works, if it's good enough, then it will get adopted. Students are using it to do lit reviews, although sometimes it's good at lit reviews and sometimes it's not. They're using it to do outlining for their dissertations, to help them think through how they sequence things. Some of them are using it to check their own writing: you can put text in and ask it what the takeaways are, and if ChatGPT doesn't synthesize your prose, then maybe you didn't write very clearly in the first place. So it's kind of like having another writer check your work. And then we've also been connecting it to agent-based modeling systems, so you can run really interesting experiments with agent-based models that are driven by ChatGPT inputs and outputs. So there's a lot of really interesting stuff that can be done, and we're just scratching the surface at this point.
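To make the pattern Todd sketches concrete, here is a minimal agent-based-model loop where each agent's action comes from a language model. The query_llm function is a stand-in placeholder (a real system would call a model API there), and the agents, personas, and game are invented for illustration.

```python
# Minimal sketch of an agent-based model driven by LLM outputs.
# query_llm is a placeholder; a real system would call a model API here.
import random

def query_llm(prompt: str) -> str:
    # Stand-in for a real language model call: pick an action at random.
    return random.choice(["COOPERATE", "DEFECT"])

class Agent:
    def __init__(self, name: str, persona: str):
        self.name = name
        self.persona = persona
        self.history: list[str] = []

    def act(self, context: str) -> str:
        prompt = (f"You are {self.persona}. Situation: {context}. "
                  f"Reply with COOPERATE or DEFECT.")
        action = query_llm(prompt)
        self.history.append(action)
        return action

agents = [Agent("a1", "a cautious planner"), Agent("a2", "a risk-taker")]
context = "round 1 of a shared-resource game"
for step in range(3):
    actions = {agent.name: agent.act(context) for agent in agents}
    context = f"round {step + 2}; previous actions were {actions}"
    print(context)
```

The interesting experiments come from swapping the random stand-in for a real model and varying the personas, then watching how the simulated dynamics change.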
And our final question is: how do you think AI can help society, as opposed to harm society?

Well, I'll jump in, and then you can tag-team and see what I missed. It holds the promise of doing tasks that are boring, rote, and not stimulating, and giving humans more free time. That said, I have yet to see free time come from any technology advancement; it always manages to get filled. We're seeing amazing capabilities. I was a biochemist; I did protein structure and function. We wished we could predict, given a DNA sequence, what a protein structure looked like. AI is solving those problems. Is it 100% correct? No, but it has basically solved every single structure, and it gives us a starting point where the humans can now come in and do really interesting work that builds upon it. So I think it has the power to do a lot of the stuff we wished we were able to do but didn't have the time or the patience to do. The challenge will be: is it accurate? Is it equitable? And is it good enough for humans to then use and build on? That's an open question.

Thank you. Meredith and Vanessa, any comments? How does AI help instead of harm? We have about one minute before I have to hand it over to Andreen.

So I would push back on that notion a little bit. We are, as I said before, about 30 years into the technological era, so we need to add nuance to our assumptions about technology. I wouldn't assume that there is something about AI that is going to be helpful for society, and I would not assume that AI is going to be all good or all bad. I would just encourage people to add nuance to their understanding of it; it's not really about binaries anymore. And in general, I'm really optimistic about the field of public interest technology as a way of helping people understand all of the nuances and all of the potential implications of new technologies.

Thank you. Vanessa? Yeah, I want to plus-one Todd's comment about AI and protein structure, for those of you who aren't aware of how AI is helping the scientific field. It's super interesting and doesn't get as much press as ChatGPT. And I agree there's a lot of promise. We don't know what we don't know, but we do really need to think about how we want to use these tools. What are humans not good at that maybe the tech is better at, and what are humans better at, and how do we want to pair up and use these tools to create really exciting work opportunities, et cetera? I just don't know if we know what that is yet, but we should be thinking in that way, especially since there's a lot of, as Meredith said, hype out there.

Panelists, thank you so very much. Andreen, back to you.

Hello, everyone. Sorry about that; a little bit of a snafu on my part. I wanna thank you all so much for joining us. First, I wanna thank our panelists for joining us today, and our fearless moderator Renee for helping us navigate that conversation. Second, I want to thank the audience for showing up, and as a wonderful treat, we will be randomly sending two of you a copy of Meredith's book, which we're excited to share. The third and final point I wanna make is a bit of a nod to what Meredith talked about at the very end, and also Vanessa and Todd in many ways: the work of the public interest technology program is to help our students think about the social and political implications of these tools and how they want to contribute their skills. Our hope is that we see more PhD folks, and other folks in general, go into nonprofits, go into government, and help organizations navigate this world. If we're gonna believe that technology is subsuming everything, we obviously are going to need people to help us navigate those waters. That's the charge of the university network. So I wanna encourage you all to think about how you might partner with us to help make that pathway more legible for our students. I think the reason why you're seeing industry represented is because industry does a really good job of helping students see what that pathway can be. So I want you all to consider being collaborators and partners with us to help make that path even clearer for other students. And finally, I wanna let you know that we will be distributing a recording of this session to all of you within about 48 hours, as is usually the case. Any questions or comments that we were not able to answer today, we apologize for that, but we'll try to tackle some of them in other ways throughout the rest of the year, because I suspect we'll be talking about many of these things. And links to some of the resources that were identified will be shared in the final summary. So thank you all so much for your time, and have a good rest of the day.