 Please welcome to the stage, James Mickens, Larry Lessig, Latanya Sweeney, and DJ Patil. For those that don't know me, I'm DJ Patil. I'm the former US Chief Data Scientist, general partner at Great Point Ventures. And what we have here today is a remarkable panel to talk about where we're going on a number of these critical issues of how we are going to carry this work forward of the internet and how to think about the future fundamentally. Our three panelists, just a quick overview of them. In no particular order here, we have James Mickens, who is faculty director of the Applied Social Media Lab and is on the board of the Berkman Client Center for Internet and Society. He's also associate professor of computer science at Harvard University. Some of the very special things that he focuses on are the performance, security, robustness of large distributed web services, and a lot about how the fundamentals of the internet works. We have Latanya Sweeney, who's been heavily influential on my own work in policy. The Daniel Paul Professor of Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences. One of the things that is really insightful that she brings to the table is she's a former chief technology officer of the US Federal Trade Commission. And she's also, if you've ever thought about HIPAA and you've heard about HIPAA, HIPAA is fundamentally based on the work that Latanya did. Larry Lessig is a Roy L. Furman Professor of Law and Leadership at Harvard School. He's taught at Stanford. He founded the internet and society center there as well and also at the University of Chicago. And particularly, he clerked for Judge Richard Posner at the Seventh Circuit Appeals Court and also Justice Anton Scalia at the Supreme Court. He's the founder of an incredible number of things, including the founder of Equal Citizens and founding board member of Creative Commons. When anytime you see that logo of CC, and he's written in extensively on a number of books, which are some of the must-read books in the space around how to think about the formation of the internet, the contract systems that we adhere to, and I've just been a long-time follower of his work. So maybe to kick things off, James, I wanna start with you and really see this up around the backdrop of what we're seeing happen right now. This next week is the launch of ChatGBT, which is the fastest growth product ever that's happened. We have every student who's applying to college, using this technology. We have every student using it in some different way. Universities are trying to figure out. We have the conflict in the Middle East happening right now that we have to address and the role of social media, vaccine hesitancy. The Surgeon General has weighed in on the issues that are being heard from parents, physicians. We have an election coming up. And so you're, as this leader of the new Institute for Rebooting Social Media, you've got this three-year research initiative to address media's urgent problems, misinformation, disinfo, all these things together. How are you going to bring together the different groups to tackle this problem? That's a long list of problems to solve. I was gonna lean on the president here and ask him for some inspiration, but I'll try to channel him. There are a lot of problems that you just listed there. And I think what's interesting about that list that you just gave out is that it really calls out the fact that technology is incredibly pervasive in our lives and that it's true that technology isn't always the solution to problems, but it is oftentimes adjacent to and involved in what those problems are. And so I think we're actually in a really unique moment in time right now where even though different people may disagree over what the fixes are, technologically speaking, people know that something is not right. I'm both the left and the right and various political spectrums, various personal backgrounds. And so I think that one thing that we wanna do with this new lab that we're creating is we wanna make sure that the solutions that we try to come up with to the extent that they are technical are actually grounded in good science and good engineering that we're actually getting software developers, HCI people, folks like that, statisticians involved in the work that we're doing to make sure that when we find a problem and say, oh, this should be fixed in some way, that the solutions that we're coming up with are actually feasible from a technical perspective. I'm sure that a lot of people on this stage or in the audience have heard someone who's very well-intentioned in the policy sphere kind of akin to what Kasha was saying where they say, oh, there's a problem, let's just build an app. We'll just make a website. We'll just make a large language model and then that should probably fix everything, but it's not that simple. You need to have not only the policy people, the regulators, the social scientists, but also the technologists in the room to make sure that these solutions that we're coming up with are actually technically grounded in solving the actual problems that we want them to solve. Well, one thing that seems that you're doing very unique is you're bringing builders into this process. All the panelists that we had before, they're all builders of these technologies. And so how are you going to convince people to leave their big salary jobs to come into academia to work on this? And then how are you going to focus that to actually create tangible solutions? What does that look like? I plan to recruit mainly using personal charisma and blackmail. That's the top of the kidding, of course, in case any members of law enforcement are out there. A thing that I think is interesting about this moment as I was mentioning before is that there's a lot of feeling out there even amongst engineering people, even amongst developers, that something is not quite right with the way that these big tech companies are operating. And in fact, what these small tech companies are doing. And so I think that even if you'd asked me that question, let's say five years ago, I would have been less optimistic about our ability to bring in really talented folks from industry to work on tech for the public good. But already we've been hearing from a lot of people, both from seasoned technologists and also students, many of whom are in the audience, saying there has to be a better way to build these products. There has to be a better way to think about who technology is serving and how we should build it to sort of center the public good. So I'm actually not that worried about being able to recruit people. The thing that's sort of the biggest challenge to me is figuring out what exactly should we do in a way that is centered on the public good that listens to the needs of real people, including those who have previously been disenfranchised or ignored by technology? How do we figure those things out? And I think if we can answer those questions, then I think the tech talent will be there to help us find the solutions. Well, Latanya, turning to you, you've also been on the regulator side, helping out the FTC. And so I'm curious, especially as a person that's kind of created, I mean, you helped be the lead of this entire field of public interest tech. What do we think that the regulators need in this moment and the efforts that you are really championing here to help on that side from the government of the executive branch for what they need to do? Yeah, I think one of the ironies from the first panel is the manifesto that the guy from Netscape put forward is actually how we've been operating for the last two decades. And that the people who would normally help us, the regulators, the journalists, the civil society organizations have given technology a free pass, have just not been engaged at all. And part of that, when I was at the Federal Trade Commission that really stuck out at me is we had amazing ways of finding deceptive ad practices and monopolies and so forth if you wore a brick and mortar building. But they didn't have any way of taking those efficient and effective techniques and applying them online. And a lot of what I did there was basically building labs and building tools so that they could learn how to do their job better because they could now do it online. As we now look at how much of our lives are online or have this technological component, if you think of everyone who would normally regulate us or regulate those areas or our laws, all of them are currently up for grabs by what technology design allows or doesn't allow. I have a 15-year-old son and when he was younger, he and I got in a big debate about what is free speech, which parents will do, but he went on to talk about what he viewed as America's free speech, but it was Twitter's view of free speech, which was an America's view of free speech. And I was fascinated by that because America's view of free speech really gives space for the underdog for the voice who would otherwise be drowned out to still speak, whereas Twitter's notion of free speech was where the crowd is freed to drown you out and intimidate you even off the line. What I found more disturbing though is when I surveyed students at Harvard, how many of the undergraduates had a view of free speech that was similar to my sons or similar to Twitter's. And it begins to help you understand how if FTC can't enforce price discrimination or the Civil Rights Act or any of these other kinds of laws that we have because they happen online, it makes you understand how it is, how ineffective and how much freedom they've had. And the question is, how do we shore up and let those who would help us how built in technology and mechanisms for them to do their jobs? Riffing off of that, you had a big announcement yesterday and so I want to hold space for that announcement because it dovetails around this concept of not only free speech, but also what is actually happening inside the platforms as they try to figure this out. So could you talk about that? Yeah, yesterday, many people may recall Francis Halgan was the head of Civic Integrity for Facebook and she leaked about 1,000 documents from inside of Facebook, which were collectively known as the Facebook Files. We were able to get a copy of those and we have worked hard over the last year or so to solve all kinds of privacy and security issues and we just made them public yesterday, fbrcive.org, just give a shout out for it. But the reason we made it public and the reason we took on that task is actually it isn't about Facebook, it's about all of these platforms. It's about what's going on behind the scenes or with respect to content moderation. What's going on behind the scenes with disinformation, what they know and don't really talk about outside and you realize that those problems aren't Facebook's problems, they're across the board. We just don't know how to build trust at scale. We just have no idea how to do content moderation. Forget a list or so forth. Which about is the level of technology that they have? You think about Facebook, it's international and in those documents we had 20 different languages that had to be translated and the moderators for those lists don't have any idea what's going on in those countries and rely on Google Translate to tell them whether or not to allow the content or not. I mean the number of issues are huge. I think the goal of releasing those documents in five years if we can provide coherent answers to these issues whether it's new technology policy and just knowledge and insight that in five years can we enjoy the benefits of social media without the harms? Having been the builder of that boring social network that people were referring to before where we send you lots of updates about your LinkedIn connections. It's one of the and being also responsible for trust and safety there. It really resonates with me because of the complexity of what it takes to actually build these things and figure it out and navigate it. And then also not having anybody you can really talk to in fairness. And so Larry, I wanna turn it to you because in the beginning of the internet as we heard it was like it's about discourse. The conversation it was about finding ways to engage was the utopian version of that. And it feels like we've gone through several stages if not stages of grief as we've gone through this. But you've been at the forefront of all of these transformations. And I'm wondering what is the modern version of where we stand today on this discourse and deliberation and what does it need to look like? Yeah, so the conversation today so far has been a lot about how we change the internet. I think this lab is also gonna think about how we begin to change democracy. Because I think there's an urgent need to rethink what we imagine democracy is. Right now democracy for us is a bunch of elections and a bunch of crazed clowns in Congress and that's our conception of what we should be doing. And if that's the conception of democracy, we're toast. I sometimes like to think of it like, so you're the captain of the Titanic, you've just hit the iceberg. You step out onto the deck and you see all the overturned tables and you think, okay, we can fix this. And then your crew comes to you and tells you that there's a gash and there are six of the units that are filling and you realize it doesn't matter whether we fix this. The gashed hall means we're going down. And then you've got to convince people to climb into lifeboats, which in the middle of the Atlantic, in the middle of the winter, in the middle of the night, on the Titanic is not an easy task to do. And so the analogy here is like, I've been for 17 years working on how do we fix our democracy? Like money and politics, gerrymandering. That's the overturned tables. We know what we would need to do to get us maybe for the first time, but get us a representative democracy. But there's a gash in the hall that in some sense means it doesn't really matter because we are under such a threat that even fixing this won't fix the democracy. And the gash is AI broadly conceived. I don't mean chat GBT. I mean, artificial intelligence, which corporations were the most important first artificial intelligence that began to muck up democracy. But then add the first contact with AI, social media, which mucked up democracy for totally different reasons, but in really profound ways. And then second contact, chat GBT, AI, all this sort of stuff, which we will see in 2024 in a really profound way. I think we need to realize there are entities whose purpose is not to make us a healthy democracy who actually have more power over our democracy than we do right now. And that in a certain sense, we have to find the lifeboats to move us into a safe place where we can do democracy without being mucked up by these really powerful forces. I think deliberation is a central part of that. So one of the things that we've just determined to do is to enable distributed, scalable, extremely cheap deliberation for anybody around the world that wants to build it inside of their community and their college and their game, whatever. We've about to acquire, I'm not sure where we are on this, you're at the center of it, but about to acquire a really powerful deliberation platform that solved a lot of the really hard problems of deliberation. We will open source it immediately. We will then begin to build an opportunity for people to embed it inside of their own infrastructure, an API for deliberation. You're in the middle of a game, push a button, and you're inside of a healthy space for deliberating on your problems, small groups that aggregate quickly. Because think of it as the kind of Google Docs for deliberation, like very cheap, easy accessible and powerful to enable people to engage in this practice. Because I think if we don't learn how to talk to each other again, and to listen to each other in a safe and healthy way, we're never gonna have faith in democracy again. We're never gonna have a conception that there's a reason for us to turn over to ordinary people the project of making a choice. And so this is one way, I think, to begin to build a different conception of democracy, which I hope will include things like citizen assemblies, making really important decisions about what local communities should be doing, maybe even what the nation should be doing, but contexts where AI is not gonna invade and pollute and corrupt and distort what democracy could be. Maybe take us a little further along that dimension of what does that look like? Specifically right now because we are fraught as a country across these questions around deliberation. We were calling it filter bubbles before. I mean, you coined a bunch of these, the kind of key terms in this area. But also, we could take abortion as an issue. We could take the current issues happening right now in the Middle East. We could take a whole slew of other vaccines. And what we've seen also evidence is that when we expose people to certain problems, they actually also get more entrenched in their view or they get more potentially radicalized. And so how does this start to, how do we actually get this rolled out and executed on? Right, so there are two important points here. Number one, it depends on how you design the mix of people in the deliberation. And number two, it really, importantly, depends on the type of topics that you at least begin with. So you wouldn't begin with abortion. You wouldn't begin with the extraordinary horrors in the Middle East right now. You wouldn't begin there. What you wanna begin with is projects that give people the experience of actually recognizing they can come to some understanding to build that muscle. So we had a, the first version of this deliberations platform, deliberations.us, we had a deliberation around the Electoral College. And we were able to produce, through thousands of people participating, consensus around reforms for the Electoral College that were cross-partisan, but very different from people's attitudes when they walked into the virtual deliberation room. Like they changed, their ideas changed, and they changed because they heard other people like them. They realized the other side was not a bunch of lizards. The other side were actually people with similar values and hopes and dreams for their kids. And so I think that the strategy is to leverage the capacity we have to make sure we can build healthy environments, safe environments, and then the muscle that gets built by experiencing and exercising that in that environment. And that's the idea. Spread it everywhere in this healthy way and begin to demonstrate that in fact, it's not true that every group sitting down and deliberating turns out to be a polarized mess. If that's true, that that's what's produced. It's because it hasn't been architected in a careful enough way up front. I wanted to ask this question. If you had the key, you know, you're here at Harvard, you're training some of the preeminent minds, the future people are gonna build these platforms. If you imagine that you could take the leaders of these platforms, the Mark Zuckerbergs, the Jack Dorsey, the others, pick your favorite genre of people and put them back into your classrooms and so that they would have the ethos of knowing what's coming ahead. Not only on the platforms with misinformation, disinfo, but the potential of what AI is going to do. What are the key things that you would want to instill into their knowledge base and understanding so that they could go forth and would feel confident that they are going to be good stewards of these platforms and maybe we'll just go across this way starting with you, LaTanya. Well actually, we do have an army of such students that we have a concentration area in government called Technology Science and we've had hundreds of graduates and we teach them to be public interest technologists and they've gone on and done amazing things. Laws have changed, regulations have changed, business practices have changed because of the work they do. What do we teach them? We teach them how to identify technology society clashes and if they're working at the point where technology's being created, their goal is to do the hard thing. The hard thing is how do I look at how this technology goes wrong? Who is it that what I'm building for, it doesn't work. It just won't work for them and what is it that I can do in the design of the technology to combat that problem? This has been very powerful as students have gone on because a lot of times the clashes between technology and society we see that we've talked about, we see them when the product is in the marketplace but a lot of times it could have been solved so easily in design and then opportunity is lost once it's commercialized. But if you can catch it during design, you can modify it. I mean it's not an easy task because already it's hard to produce some new technology that doesn't exist already so you're asking them to make their work harder but if they do that they can bring to bear technology that is gonna harmonize more with society. James? Well if I could get some of these tech titan CEOs in the classroom I'd basically force them to take a liberal arts curriculum. I mean the reason I say that is because and this is changing with the younger generation of engineers which is heartening but I think there is still a tendency among some engineers to say I just wanna do the maths as our British friends would say. And they say well the implications of what my technology will have upon society that's an upstream policy problem. That's for someone with a different job title than engineer or technologist and that's just completely false. We no longer have the luxury of, I mean we never had the luxury of thinking like that. And so when I think about artificial intelligence and when biases are encoded in training sets think about image recognition algorithms that can't classify women or people of color, things like this. How did that happen? These weren't intrinsically technological failures. There are failures and in many cases sort of empathy or moral imagination. And so I think that it's really important when we talk about how are we going to train engineers, the next generation of engineers. We have to think about well it's not just about the science and the engineering. It's also about understanding how your products will be embedded in the outside world because it is a position of great privilege to be able to say I can just build things and then not worry about some of these externalities. That's a position of great privilege and so technology is amazing just to be clear. I am a technologist. I remember when I was a kid I would see stuff in the science fiction movies that are true now. It used to be on Star Trek. You could talk in one language and then get translated to Klingon or whatever. That is sort of objectively amazing but it's also objectively disturbing when we see for example algorithms being used to encode pre-existing societal biases in terms of who gets mortgages, who gets sentence at parole hearings and things like that. So I really think there is a big issue of increasing awareness among engineers that you're not just making widgets for the shareholders. You are making some key foundational pillars of what we hope will be a healthy society and you have to think about that explicitly. You're not just going to automatically fall into the right thing that just so happens to serve the public good. I'll just share real quick before we get to Larry. One of the greatest ass-kickings I had from President Obama and since he's not here, I could share it, is we were building a precision medicine initiative and he had said, you need to make sure that people are going to be impacted or at the table. And so we did a little bit more of the classic thing that we do in Silicon Valley. We got groups that represented those population. We created personas. We had the rare disease network and et cetera. And we came back to him with what we had learned and he said, do you have the people at the table? He said, yes, sir, here's what we've done. And he said, I thought I was clear. And when the president says something like, I thought I was clear, your day's gonna suck. It's not gonna go well. And we went back and what we realized is one of the fundamental flaws as builders of technology is we use personas. And personas are the equivalent of like going to a photo frame shop place and you see all the people in the photo frames and everyone's smiling and their eyes are open and you're like, that doesn't happen in real photos. And when we got the real people around the table, it fundamentally changed the way we built and it was a really important eye-opening lesson that I wish I had had substantially earlier in my career. But I'd love to hear your take if you can transport all these people into your classroom. I think we have to build a discipline to challenge happy thinking. When you listen to these technologists talk about what they were gonna build. I remember 20 years ago in Silicon Valley listening to them talk about what they were gonna be building. There was all this happy thought about it was gonna be the best of all. It was gonna be extremely profitable and make society a wonderful place. Rather than recognizing the deep conflict that often exists between the business model and the social objectives. And I think the best place to see that is in fact in La Tanya's archive of the Facebook files. I had the honor of representing Frances Halgen when she first came out and helping her in the steps that led her. I didn't do the training, but the people who helped her stand up turned her into an extraordinary spokesperson for the tension that she had experienced inside of Silicon Valley. But if you look at the Facebook files, you will see all sorts of examples of engineers, really good, serious engineers, raising their hand and saying, hey, we should do this and this and this to make this platform safe. Or we can't do that because it's gonna lead to all sorts of bad consequences. Again and again they were doing the moral, ethical thing for that platform. They were raising that issue and again and again they were overruled by the business model. Overruled by people who said, no, no, no, our objective is to maximize engagement. That's what we gotta do. That's what Wall Street says we have to do and that's what we're gonna do. And I just wonder how many times there were these engineers who at a certain point realized their whole presumption about what their life was gonna be was false. Their whole presumption that they were gonna go out and do good was contingent upon good being consistent with a business model. And the reality is that's not happening for some of the most important things that we need in society. We need news that is trying to help us understand the world. Not that's trying to maximize the amount of time that you're gonna be spending going down rabbit holes about all sorts of crazy stuff. But the problem is the business model of the people who are providing us news right now is trying to figure out how to get you to go down a rabbit hole and to spend all your time looking at all these crazy stuff rather than helping you see the issues and all their complexity and how you're supposed to be dealing with that. And if we don't, it's almost like, so Martha Minow's father was one of the most important people in the arc of American news developments when the head of the FCC in the beginning of the 1960s remarked about the vast wasteland that was television. It's kind of hard to imagine compared to where we are right now. But his speech triggered an extraordinary rethinking of what news would be. And it led to a period of 25 years of a really important ability of us to understand the world around us, not completely, like there were race issues that were not discussed, poverty issues that were not discussed, sexual orientation, not even an issue according to that view of the world. But still, it helped us understand because the FCC basically said your business model is not gonna worry about returns from telling the story about the world. You have got to make that part of what you do. We can't do that right now. What's the role of the, this is a preeminent place for business school, law school. We've had lawyers who've come out of Harvard who kind of represented tobacco or junk food. That's kind of a similar, maybe not an imperfect analogy. We have people come out of the business school who go to become venture capitalist, who are the people who can overrule the engineers as product managers or are going to go start companies. What's the role for them? And those parts of the institution here? So I think one really important change that should happen tomorrow is that engineers should begin to have an ethical obligation that's imposed upon them as engineers in the same way that lawyers do. In the context of the Trump cases, there are many, many examples where the lawyers would not repeat what Donald Trump was saying because they had an ethical obligation and they knew that they could be punished for saying false things the way Donald Trump was saying them. I think if engineers inside of Google or of Facebook or if there are engineers in Twitter anymore, I don't know, but if these engineers had the ability to say, you know, I just can't do that because that conflicts with my ethical obligation as an engineer and you can't tell me to do that because that's illegal given this is my ethical obligation. We could begin to put meat or power behind the idea of ethical constraints operating in the context of technology and that just doesn't exist right now. Did you want to comment in here? Yeah, I did. I want to say that also the obligation, so when we think about public interest technology, we purposely say technologists, we don't say engineer, we don't say computer scientists, we say technologists because it turns out that these technology society clashes can be seen anywhere among the technology lifecycle and who can intervene, who has the power to make the decisions changes as you go through the lifecycle. Only in the very beginning is it the engineer or the computer scientist, but then somebody's got to figure out how to make money on it and so now the business package comes in. But if the person who's crafting the business package also has this same eye out for what are the technology society clashes and we give them tools of how to look for them and how to resolve them, then we come out with a business case that doesn't have the clash. If it gets into the marketplace, then we need regulators, we need policymakers and others who know how to do their job using those same kinds of, this is a clash, what are the tools that I can bring to bear on it and so forth. So what we have found over the years is that actually we reach out to disciplines around the school and some of the most amazing work have not been, well, we've had the amazing work from computer science and engineers and statistics students, but also from history of science, from psychology and so forth who've just made huge changes. These students, their work has made huge changes in all of our lives. They've changed practices around prices and so forth, new laws and what have you that have just dramatically improved the fabric of how we live. The problem is one school can't do it all even if you reach out to all of the disciplines around the school. And so that's why we have the Public Interest Tech Network which is like now 80 schools trying to do the same thing. It's a great point that you raise is I went around over this last year and I interviewed some of the seminal data scientists. This was all released free to the public on LinkedIn in the next couple of weeks. And the common thread that they all said is essential for a data scientist is liberal arts training. They said that's the only place you're going to get these skills currently and they need to expand that. You also brought up this interest which I think is a real appropriate word is clash. We have these clashes. Whereas we're entering the middle of the third industrial revolution, we've had clashes on privacy, we've had it on other areas, social media, our views. We're entering AI now. What is this clash going to look like? Do we need a czar at that federal government level? Where it is? Because at the privacy level, it's unclear exactly who owns this. It's sort of a hodge podge. And so I would love each of your takes. I mean, James, starting with you would be what should we be doing as we enter this next clash? Well, it's hard to figure out what the right answer is in the same sense that when we were talking about moderation in the last panel, really there's never a perfect moderation strategy. Instead, there's a series of decisions you can make, all of which have badness in some aspect and you have to pick the one that you think is best based on the context. I mean, even though I've been sitting up here and criticizing technology and kind of old man, get your football off my lawn type thing, I think technology has a lot of promise. I think it is also possible to over-regulate industries as well. And I think it's important for people to think about because sometimes you hear policy suggestions that are very invasive in terms of how companies can move forward and releasing new technologies and so on and so forth. That being said, I think there are interesting analogies to, for example, the environmental protection reviews that you have to go through if you want to build buildings in certain states or certain locations, whereby you're basically required by the government to do some form of due diligence before you go out and possibly release something into the world that may cause any number of harms. And so I think that there have been a lot of interesting proposals in the AI space around things like red teaming and things like that, basically getting people to go in and essentially try to attack the model, try to trick it in and saying things that are racist or sexist or so on and so forth and give those results back to the model creators. So I think things like that are great ideas, but I don't think that anything that any of us would come up with today on this panel is gonna be perfect. I think the big thing that we have to try to encourage industry to think about is that they should be thinking about these things and that we have to work together to try to figure out what works well, what does not. So instead of just saying like we're just gonna pass one big bill that hopefully is just gonna be the end all and be all of AI regulation, let's say, I think that we should do a couple things. We should be bringing together not only people from government, the regulators, but also people from industry, academics. That's one of the things that we want to do here in this new lab to bring together those people from across sectors to start wargaming some of this stuff. And to be honest, I'm not quite sure what the best approach will end up looking like, but I do know that the only way we're gonna get to a better state is if we include voices from a lot of people, not only technologists, but also social scientists, regulators, and importantly users, regular people, who will be impacted by these technologies. We only have three minutes left. I want to go to Larry, I want to go to you and then Latanya and then finally do a quick 30 second wrap up is we're looking six months out post-election of 2024. What are the key things that we want to make sure if we're looking back in retrospect that we would have started today? Well, I think that the threat, the foreign AI threat is huge. I mean, we're gonna see the first round in the Taiwanese elections in January where the Chinese will deploy AI against those elections and they have a very sophisticated defense system. Audrey Tang has been really powerfully effective in building that defense, but it's not clear what will happen. And that's just the first round. They'll then come to the United States in 2024 and not just that. That's a warm up act. Sorry? That's a warm up training act. Yeah, that's a warm up training act. So what we need to do is look at what happened there and then scale it up, orders of magnitude to protect us in 2024 because it's not just going to be the consequence of Facebook trying to maximize engagement. That's bad enough. It's also gonna be intentional foreign actors eager to screw this up in a really dramatic way. And we're totally vulnerable to that. I mean, we have not stood up a fraction of what we need to protect ourselves against it. Whose job should that be? Is that the president at the end of the day? Is it CISA? Is it Congress? Whose job is it? Well, if it's Congress, we're in real trouble. That's why I was gonna go with it. It's defense. I mean, I kind of think, we used to have a defense department then 9-11 happened and discovered it didn't really defend us. Then we set up Homeland Security and that does a bunch of things. I think there's like a need for a digital defense department. I was talking to you about, I wanna teach a course in the next fall. It's gonna be called Digital Defense Department. What do we have to build to be able to defend ourselves against the range of threats that we understand the internet has introduced? Not just foreign threats, domestic threats, fraud and all the insecurities that are built. And the intuition about how to do that requires understanding the relationship between technology and policy, the kind of thing that your students, I think, have intuitively. Latanya? Yeah, I know I have very few seconds here left. I think six months after the election, we'll be, we'll wish we have gotten the energy and the collective will together to really earnestly tackle this problem. This is not the first wave in this technology society clashes, or the second or the third privacy was the first wave and we haven't solved it. But now we're literally at the brink of huge, huge disaster going forward. In 2016, my students were first, we were first to find these persona bots on Twitter. They looked like real people. They acted like real people. They only had a few followers, but all of their followers were human. And later we found out these were bots were put there by state actors. Now everyone, anyone can do the same exact thing through generative AI. They can make impressive websites and so forth that look like news websites so that when your followers click the link, they get reinforced. They can choose keywords that when you Google that phrase, just because it's worded that way, the only the false information comes up as the first hit. And when you realize how easy that is to do at scale, it means that we have a real problem. How do we know what to trust? How do we not get information just home in on us? And lastly that the AI models are only about us. They're about American public. So it's not like we could turn around and use the same kind of approach on another country. 15 seconds each. What do you want the public to do? Why do I always get these hard questions first? Let's see. I think the main thing that the public can do is learn, educate and become more empathetic. I think the last part is particularly for technologists who haven't always traditionally been as empathetic as they should have been to all the people who were affected by their technologies. Larry and then Latanya. Slow down. I'm a big believer in the slow democracy movement like the slow food movement, but start understanding the world not through Twitter slash acts or Facebook or these fast media sources. Start listening to podcasts, to long form journalism, to efforts to make the complex understandable. Latanya. I would say engage. I mean, people are trying to come up with alternatives and other ways of doing things and part of the way we got here was blinders, just believing in the shiny new thing and just running towards it without paying attention to where we're clear signals all along. I don't think that's the way forward. We want you to enjoy the new technologies. I got my aura ring and so forth, but at the same time, we have to be aware that I don't know all the places my aura data goes, for example. Please join me in thanking our panel, Latanya Sweeney, James Mickens, Larry Lessig and we're gonna have a panel that is going to follow on on top of this great effort in just a minute.