Thank you so much for being here today. We really appreciate everybody coming out. We do have a little bit of unfortunate news to share. I have Jason Goldman here, the senior advisor to the president for technology. We'll hand it over to you, Jason, and then we'll talk a little bit about programming. Hey, everybody, really glad to be with you today. Unfortunately, President Obama woke up with COVID-like symptoms this morning and is not able to attend any events on his public schedule for the day. So he's very sorry to miss this, because he is deeply committed to the work of the lab, and a big believer that this conversation about how the future of the internet needs to evolve is timely and completely relevant for the moment that we're in. I'm looking forward to participating on his behalf, and the programming of the event will continue. Thank you so much. We know folks are disappointed. We are so excited to have our senior members of leadership for the lab from across the university, and the one and only Jason Goldman here to participate in the conversation today. I would love to welcome to the stage the Dean of Harvard Law School, John Manning. Hello, everybody. Good afternoon and welcome. We're all disappointed that President Obama couldn't be here today. We wish him a speedy recovery, and we look forward to welcoming him back to Harvard another time. We're here today to celebrate the launch of BKC's new social media lab, and to do so with a terrific panel on the future of the internet, featuring the faculty director of the Berkman Klein Center, Professor Jonathan Zittrain; Kasia Chmielinski, the co-founder of the Data Nutrition Project; Yoel Roth, former head of trust and safety at Twitter; Tracy Chou, the CEO of Block Party; and Jason Goldman, the chief advisor on technology to President Obama and former chief digital officer at the White House. Thank you. Well, thank you so much, John. Thanks to the Obama team and to everybody.
It's a slightly different Block Party today than we were expecting, but we have a lab to launch. We're going to launch it right now, and I can't think of a better group or setting in which to do it. The Applied Social Media Lab, ASML, is an effort to increase the breadth of the conversation, and the action, around imagining a better set of technologies than the ones we have right now. We should just take a poll of the room. How many people would say they are thrilled and delighted with the current state of the internet? I saw, like, a hand tentatively go up and then go back down. Thank you for holding dissent in the sphere for us. And it's so easy to get resigned to it, to even have that feel like a kind of complacency. And we're hoping together to get beyond it, and to include among us so many folks who have been in the belly of the beast, who have worked in one form or another within Silicon Valley, who have been both excited and at times bemused, even terrified, with what they're doing. And part of the function of the lab is to give a chance to be in a different environment, swim in a slightly different bowl, and build in ways that go beyond the pure economic incentives of the sort that are on display in my Twitter, sorry, X, For You tab. Which, should I ask how many blue checks are here today? No, let's not do that. But no offense, if you have a blue check, we'll take $8 too. But to think about what other ingredients in the stew can make for something beneficial, uplifting, to capture some of the promise of those early days. And to have this group of folks here to talk about that, both then and now and what next, and to have the folks that are in the panels that follow ours, our faculty panels and such, immediately following, is just such a privilege, and we're so glad to be here. And we surely hope that the president will join us again, it's not like these problems are going away, to continue the conversation. So with that, let's get into the panel. Dean Manning already introduced Jason.
Thank you, Jason, for being here. Let me turn now. Well, actually, let me ask you, Jason: what was your first internet experience? My first internet experience was a 1200 baud modem in St. Louis, Missouri, to local BBSs. 1200 baud was probably, that would be 1995. Yeah, I think even 1994, yeah. And so I grew up in St. Louis, was a big participant in the bulletin board systems of the time, from there kind of discovered IRC, which was like a big community early on, and then built my first web page, which was very pretentious, in 1995. It was like a quote. Did you get the domain name? Verypretentiousstuff. It should have been called that, because it was like a quote in French from, like, Waiting for Godot, and each line in the quote linked to a different part of the website. Is it still there? No, I can't find it. It's not even in the web archive. No, I think it was too early for Brewster. So it's really, thankfully, lost to the sands of time. And then, yeah, I discovered Usenet and other places then. Better to ask how old you were then, if you would describe it. I would have been like 16 at that point, yeah. Yeah. And so I ended up working on Blogger, a weblog publishing platform, then worked on that at Google, and then was part of the founding team at Twitter. And so sort of my passion became a career as well. And if there was one word to describe your sensibilities about the internet circa 1995, what would the word be? Portal. It's, like, always the word that comes to mind. It was like the idea that you opened up this magic scrying glass into another person's experience and were able to see the experiences of other people in other parts of the world, and understand and hear from people in, like, their native vernacular in a way that you'd never been able to. And growing up in St. Louis, you feel like everything's the same, and suddenly you're seeing people from all over. That's great.
I feel like there's just a bumper sticker there: there's only one letter between scrying and crying. Yeah, that's right. That's the theme, but we're not just going to be pessimistic. All right, terrific. Thank you. Dr. Chmielinski, did I pronounce that okay? That's great. I'll take it. You've led a career and a life so far of what might be called digital public service. You've done time, is that the right way to put it, at Google and McKinsey. You've been at the MIT Media Lab, where you worked on the team that developed Scratch. You were among the initial cohort of the U.S. Digital Service, stood up within the U.S. government during the Obama administration, and, with us today, the U.N. Office for the Coordination of Humanitarian Affairs. And then you have started a number of projects in the public interest for a better internet. What was your first internet experience? Wow. So I was a really basic kid. I think that... You said basic? Yeah. I feel like my internet experience... Was that a cohort of kids that were basic, or called basic kids? No. Not BASIC the programming language, although that was part of it. I also think that this question is just a low-key way to figure out how old each of us is. That's right. It's like a BuzzFeed quiz. Exactly. That would be the how-many-baud-modem question. Yeah. So I think my first experience was really... I remember two things. One was chatting with my friends. There were these little AOL disks that were sent to your house, and you'd use that to dial up. If you don't know what I'm saying, just ignore it all. It's fine. These AOL disks, they were, like, dropped out of helicopters. There were so many of them. Yeah, they were amazing, right? And they were this kind of portal to get you online. And so I remember chatting with my friends, and I remember downloading music. And I'm pretty sure, in retrospect, that it wasn't actually the chatting with my friends or the music that I found so delightful. Because the chatting...
I was really chatting with kids that I was seeing every day. And I was a very slow typist, so it would have been better for me just to chat in person. But there was something really magical about being able to represent myself however, choose a username, choose a profile, something about identity there. And then on the music side, it definitely wasn't about the music, because it took, like, a week to get a song. But there was something, again, really magical about the notion that hundreds or thousands of people were contributing little pieces to that file. The peer-to-peer community. Yes. A.k.a. the pirate crowd. I didn't know it was that when I was that age. But sure, you could call it that. So I think those two things really highlighted to me the power of the internet. I wouldn't have said it as such. It just felt like magic that these things that had no analog in the real world I could do online. Yeah. Boil-downable to a word, the way Jason did with portal? I mean, probably magic. That's how it felt when I was 12 or 13. Yeah. Terrific. Yoel Roth. One of the best lines of Yoel's LinkedIn is that he was a Genius from 2008 to 2011. Just like, huh? Which he meant literally; under that it said "fixed Macs." So you fixed Macs as a Genius. It was my favorite job title by far. And I wish I could say that it was a safer profession than the one that I entered into later, but it totally wasn't. I'll always remember the day when somebody with a broken iPhone approached the Genius Bar and was so irate about the fact that their phone had broken that they just chucked it at me. Kind of answers the question of how it got broken to begin with. Yeah. I was sort of led down a certain path of understanding this person's relationship with technology. And with their fellow humans. But yeah, it was really this moment throughout college. I worked for Apple fixing people's things, and it really shaped how I think about the connection between people and their technology.
I was there when the first iPhone was introduced. I remember the sort of immediacy with which people connected with it, and the feeling of transformation. And I had a really cool job title. I got to tell people I was a Genius. Also interesting to think about, because that sort of job has expanded and proliferated: trying to be helpful when other people are in extremis, when there's some mess, and there's a crew of people we expect to be smart and have a smile and just clean it up. And, as you said, your path included a PhD in communications, and then you ended up as head of trust and safety at Twitter. Made some mistakes along the road. Yeah, I'll leave it at that. All right. We're going to have to talk more about that. But what was your first internet experience? You know, I was all set to talk about the 14.4 modem that I remember getting. It was a 3Com. It was very exciting. It kind of makes you wonder, why didn't the modem people just start at 14.4? Yes. Why did they build up? But now that I'm not sharing the stage with President Obama, I get to tell the fun story that I was nervous about telling. I hadn't thought about this in about 25 years, but I remember when there was this new thing called Google, and I remember being in the computer lab in middle school and being really upset that the school district's proxy server blocked Google. And I remember thinking, God, this censorship. And so, you know, young Yoel starts thinking, what does one do in the face of censorship? And my answer was scamming the admin password out of the person who ran the computer lab so that I could configure the browser. Social engineering. Absolutely. Yes. And then I remember the magic of it. I'd been using Lycos, and at the time there was this Dogpile, which was a search engine aggregator. And then all of a sudden there was Google, and it felt transformative in that moment.
And also subversive, because I had to social engineer the admin password out of the person who ran the computer lab. Were you caught? No. Well, that person is here today and is ready to... If this panel turns into, like, the history of each of us confronting the folks that we've wronged, that would be great. Very good. Tracy Chou: computer science, machine learning, and artificial intelligence, graduating from Stanford, can I say the year? In the late aughts. Which is to say, a time before AI was, you know, everybody's AI now, but you were AI before everybody was AI. That was in the period when people thought neural nets didn't work, so we didn't study them. Yeah, yeah. Well, the jury's still out. But you were employee number four at Quora. Wow. And a similarly low badge number at Pinterest, and then off to the U.S. Digital Service. Talk about a pivot at an inflection point. And then Block Party. Yeah? So what was your first internet experience? I'm going to go against the grain here and say I was pretty unimpressed. So my parents were both software engineers, and my dad had computers everywhere in our house, so he was very excited to get me online. It was Prodigy, I think, like the little dial-up, beep, beep, beep, beep. And then I think there were some games that you could in theory play with other people around the world. I was like, why do I want to play chess with somebody I don't know? It's just kind of weird. I'd rather play chess with you. That was my first experience: a bit unimpressed. But I think it's maybe characteristic of being still pretty young then, and feeling like technology is just in the background, is just part of life. It's not something that unusual or different. Yeah, it's amazing how much we can get used to the status quo in new generations. Of course, we grow up in it. It's different. What would then your one word be? Meh? Underwhelmed. Underwhelmed. And Yoel, we didn't give you a chance. What was your one word? I mean, magic.
Oh yeah, you just said magic. Sorry. You can never get past that first... I mean, I guess Tracy never had that first feeling of actually being impressed with the internet. Still waiting, still waiting. My feelings have soured somewhat, but there was that early moment of really being enchanted by the thing. Yeah, I suppose there's a regrettable magic as well. And do you want to say just a word about Block Party? Tell people what Block Party is. I started Block Party to build tools against online abuse, since I was getting a bunch of abuse on Twitter. It was great, and then Elon took over Twitter and we had to shut those down, and now we're building new tools to help people stay safe online. That was an incredibly pithy, tweet-length description of it, and we're going to want to hear more about that. But let's now turn back to you, Jason. I don't know if you know what the president's first internet experience was, but I already asked you yours. I guess you were in the Obama administration as Chief Digital Officer, so when was that? I came in at the beginning of 2015, and I was there until the day before inauguration in 2017, so I was there for the last run. If you could enter, I'm now going to use the words that were shared, a portal, and through magic, somewhat unimpressively, go back to 2016, and there's you, 2016 you. What would you whisper into your own ear, after reassuring yourself that this is normal?
Well, I think what I was working on in the Obama administration was standing up a lot of channels that the White House hadn't used before. Before the Obama administration there was a website, obviously, going back to the Clinton administration, but it wasn't an active place where you'd get news, and there had never been a Twitter account for anyone at the White House. We launched a Twitter account for the president, @POTUS, while I was there, and generally tried to bring the White House onto those channels, and I think that was an appropriate focus for that era of the internet, from a reaching-people standpoint. The thing that that misses is that the way in which the internet has evolved has shown that all media is really niche media, and that what you need to do to connect with people is find where they're already hanging out online and engage with those audiences there. And I think a lot of what communications generally has pivoted to is, instead of launching your own channels, partnering with the people who already reach an audience that you want to talk to. That's true in politics, and that's true in other contexts as well. That totally makes sense as an answer, and of course that's an answer in the context of your job, figuring out productive outreach across all channels in the digital space. But you've been thinking really broadly about technology, so maybe I should refine the question: is there anybody else you would want to go back to in 2016, in the federal government, to give a word to the wise, and what would that be? I mean, certainly if I could go back to 2016, I'd be like, listen, the pandemic response, you really need to make sure that you work on that, make sure those plans are well set. So I would prioritize, you know, just put that in an envelope for January 2020, let's really think ahead there. And in fairness, they had done a lot of that work. But the thing that brought me into the government was President Obama said to me, you've worked
on a lot of systems that have built tremendous shareholder value, but you've never worked in a place that's inherently for people and for the public good. That's the question in a value-positive sense: what does it mean to evolve these technologies in a way that's inherently more positive and is infused with the values of democratic culture? And so I think that's part of what we tried to do in the administration. I think, knowing the challenges that particularly social media would be under through the 2016-to-2020 lens, I would have probably encouraged a much more robust engagement with industry, to be like, hey, there are going to be real challenges that exist in a values space, and we need to try to encourage industry to put a marker down in terms of what values they care about now. So let's do a bit of engagement with industry right here. I'm going to ask you, Yoel, if it's okay, to imagine being back at Twitter. Let's suppose Mark Cuban owns it now or something, and there's going to be engagement between you and Jason about the public interest. Is this time-traveling Jason, or are we back in sort of the present day?
I feel like it's getting too confusing if we're hanging out in 2016, so let's come back to 2023, if that's all right. And I'm curious, first, Yoel, your sense of what you would recommend to a new CEO of Twitter, or if you came back as CEO: how to infuse the kind of public interest values that Jason was talking about while also having to think about your share price and your shareholders, or I guess Elon doesn't have public shareholders now, but we digress. How to reconcile the public interest stuff with the profit stuff, and what the responsibility of the company should be. Curious your thought on that, and then maybe just any dialogue you two would have over that for a moment. You know, I think what we've seen, not just since Elon Musk bought Twitter but across the history of social media, is a profound failure of institutional trust. I think we've kind of built these platforms, and they have all sorts of cool features, and they make a bunch of money, Twitter never so much with the making-money thing, but in theory platforms make a bunch of money, and we haven't built them in a way that engenders public trust. There are communities of, in Facebook's case, billions of people on these platforms, but there's no real sense that there is legitimacy to the governance that these platforms are exercising. And Twitter wasn't perfect at this when I worked there; Twitter has never been perfect at it, nor has any other platform. But I think we've seen an even further erosion of that trust in the platform in the chaotic ways that it's been governed since Elon Musk's takeover. And I think the solutions to that are straightforward. We know what builds trust in institutions. We know that that's communication. We know that it's being forthright about how you are governing things. We know it's accountability, with data that is externally auditable. And what I would encourage whoever owns Twitter, or whoever is building the new Twitter, to consider, let's address this to Mark Cuban, you know, is to think about trust as the object that
you are trying to pursue. Think about what you need to do to build a system that people can understand, that they can audit, that they can feel legitimacy in, and that they are participants in. And unfortunately, I think all of that is sorely lacking from the Twitter of the present day, and in various measures was sorely lacking for years and years prior as well. Jason? Yeah, I think, Yoel, I unsurprisingly agree. I think the thing that the government would say as well, or, you know, that President Obama would say as well, is that it is trust. And in particular, you hit on the concept of the need for transparency. For too long on these issues, the companies have said, we are the world's experts on these topics, we know more about these problems than anyone else, we've hired all the smartest people to work on this, and we've got it. And I think, notwithstanding the geniuses that these companies do bring to work on these problems, that's just not going to be a sufficient answer. You're going to have to let other people grade your homework. You're going to have to let other people look for harm that maybe a for-profit business isn't incented to look for itself. Which is why President Obama, in his Stanford speech, talked about the Platform Accountability and Transparency Act as, like, a specific legislative measure that we thought at that time would have made sense, for platforms to be forced to do more of this collaborative exercise and sharing what they have. Now, the riff that the two of you just had is kind of dwelling in a platform that should have an attentive team, a transparent set of policies, probably a consistent one, and then applies them with some upstream values that are themselves announced. Of course, there may be a lot of folks in 2023 who just work backwards from the outcome: if somebody on their team is blocked, that means it must have been a bad decision, and their trust level goes down. And then if something goes through that you think
shouldn't have gone through, you feel the same way, rather than feeling that the system is good. This is maybe a way to bring you back in, Tracy, because is Block Party meant to not have a one-size-fits-all? Who were the blockees, and what is the party? The party is everyone. So Block Party is built on a philosophy of user empowerment: allowing individual users to choose what they want to see, what they want to engage with, and to be in control of their experience online. And so you no longer have to rely on a single central authority, which might be Twitter's moderation team or Facebook's or Instagram's, to make all the determinations of what is acceptable or not and what everybody is going to see; each person gets to choose. And so, actually, tying back into some of the regulation questions, I think going beyond transparency, one way that the government can help is forcing a level of openness such that additional developers, third parties, can build these solutions that work on behalf of end users. So just map that out a little more for us. This is, I guess, Block Party is not functioning right now. Not the Twitter products; our code is all there, we're waiting. But on a day in which the connections to Twitter are open, how does it work? So you would sign up for Block Party and configure your preferences. You could say, I'm pretty open, let most things through, I want to see stuff, I want to engage with the internet today. Or you could say, I need a break, please filter things more aggressively for me. I think that's the mode that Yoel is on. This product, by the way, made Twitter usable for me and for many, many others, and it's truly a profound loss that Tracy and Block Party can't do the incredible work that they were doing. Thank you, Yoel, for the testimonial. Save that one for our advertisements. But we would just run in the background. Block Party would run in the background to automatically filter out or mute people. How do you decide? I get that Jason might set the dial one way and
Yoel and Kasia set it somewhere else, but how does it know, and how does it reflect what the dial is asking for? It turns out there are really simple signals you can pay attention to, such as: does this person have a profile photo, or are they an egg? Did they just create their account, and all they do is tweet at people who post on a particular topic? How many followers do they have? If they have zero followers, they're probably somebody you don't need to listen to. So you have a not-so-secret sauce that uses, it sounds like so far, non-content-based ways of finding dodgy characters. User behavior and user characteristics can be much more informative and simpler to understand. So we made these auto-mute reasons, as we call them, very visible to end users, and they're configurable. So for the people who care, they can understand what's happening underneath the hood. Which is actually a very nice property of the system, which we don't always get with machine learning or AI-based systems that do some magic underneath the hood and can't be understood. It reminds me, back in the days of eBay, some accounts would have sunglasses next to them, which I thought meant they were cool, and I would buy stuff; and it meant they were shady, because they were new. It's just an HCI problem, I think we would call it. But those kinds of things let people who decided to use the Block Party add-on do it. And that was both, it sounded like, an aspiration coming from your own experiences, and it was a business. Yes, it is a business. People paid subscriptions to be able to clean up their experience. And then it stopped, because Twitter basically wanted to charge a bunch of money to let the data flow so that you could have your... The pricing was prohibitive. There was essentially no desire for Twitter to allow anybody to continue building on their platform. Last question on that: I thought that it was also possible for me to indicate friends on Twitter, and whenever they blocked somebody, I automatically blocked them. Was that part of the deal too?
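As an aside for the technically inclined: the dial-plus-signals scheme Tracy describes, simple non-content-based account signals combined with a user-configurable strictness setting, and every mute accompanied by a visible reason, can be sketched in a few lines. This is an illustrative sketch only; the field names, thresholds, and scoring rule are assumptions made for the example, not Block Party's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Illustrative account signals of the kind mentioned on the panel."""
    has_profile_photo: bool   # False = the classic "egg" default avatar
    followers: int
    account_age_days: int

def mute_reasons(account: Account, aggressiveness: int) -> list[str]:
    """Return human-readable reasons this account would be auto-muted.

    `aggressiveness` is the user's dial: 0 lets most things through,
    higher values filter more strictly. Thresholds are invented for
    the sketch.
    """
    reasons = []
    if not account.has_profile_photo:
        reasons.append("no profile photo (default avatar)")
    if account.followers == 0:
        reasons.append("zero followers")
    if account.account_age_days < 7:
        reasons.append("account created in the last week")
    # Mute only if enough signals trip relative to the user's setting:
    # at the most open setting all three must fire; stricter settings
    # require fewer.
    required = max(1, 3 - aggressiveness)
    return reasons if len(reasons) >= required else []

# A brand-new, photoless, followerless account trips every signal, so it
# is muted even on a fairly permissive setting, and the reasons returned
# stay visible and inspectable by the end user.
suspect = Account(has_profile_photo=False, followers=0, account_age_days=1)
print(mute_reasons(suspect, aggressiveness=1))
```

The design point made in the conversation is the return value: rather than an opaque machine-learned score, each decision carries legible reasons the user can inspect and reconfigure.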
We allowed people to share their block lists and block en masse. So another way that you could use the tool was: block all the people who liked or retweeted a tweet, especially if they were saying something nasty about you. Very efficient way to find those people and block all of them. Fascinating. Which might mean people would encounter accounts they hadn't interacted with and find themselves blocked, and there wouldn't be an explanation for them; it would just be, thank you for playing. Got it. All right. I mean, you're getting at an interesting point, I think, when you ask about sort of the work that they're doing. I mean, you've written about this before in terms of community-based moderation, and that is, like, another way forward. And so I think part of the question you're getting at is: what would be the things that government, industry, and others can do to encourage, like, the commons that would allow community-based moderation? Block Party was prohibited in some way, if not legally then, like, by some industry norm; it was, you know, it wouldn't be allowed. What are the ways in which we would sort of protect those commons and enable that kind of innovation? Because I agree with you, and the arguments you've made about this, that this represents a positive direction and a positive way forward, one that takes us out of the dichotomy of the government censors speech or one person who owns a platform censors speech. And so I think the next-level question is: what needs to change so that what happened with Block Party is anomalous, and not just the way in which the single owners would react? Well, that, I was going to say, teed up a great question for Kasia, because, thinking about the overall ecosystem: the vision that Jason described and that Tracy's work represents, is that to your eye a good vision, and is it attainable? What would it take to get from where we are to where that could happen? Yeah, I mean, I share the vision. I think it's not surprising; we're all up here for a reason. I think that
it really comes down to enabling meaningful choice for consumers. So the work that we do at the Data Nutrition Project is we build nutrition labels for data sets. The analogy there is essentially: if you walk into a store and someone says, you should eat this food, it's really good for you, and you say, what's inside, and they say, I can't tell you, but you should eat it, it's really good for you. We as consumers now have been trained to expect that we can have access to that information, that we can turn around the package of cookies and look at it and say, is this something that I want to eat? And also, if I have allergies, is it something that I can choose not to eat that someone else might choose to eat? And this kind of gets to your question, or your feature, of user selection and preferences. I might choose something different from you; that's fine. So I think it's a really good vision, and it comes down, to me, to this notion of digital public infrastructure. What are the kinds of things that we should expect of our tech companies and platforms and solutions that are kind of similar to the infrastructure that we have in the physical world, the roads and the sewers and the water and the electricity, things that we might, at least in some areas of the world, take for granted that someone is making sure are safe and available for us? Those same kinds of things should be available for our technical platforms and our solutions. And if we're really going to hold to that metaphor, who does build that stuff? Is it still the companies that built them? Is it government that builds them? I think it kind of depends on what you're talking about. I imagine that some things you'll want to have more control over, if not in the build then at least in the regulation; some things might be public-private partnerships. And honestly, I believe a lot of what government can do is set the protocols and standards and expectations that the technical companies then
need to adhere to. So an example might be phone systems, or telco, these kinds of things. We should all be able to SMS each other. It doesn't matter if I have T-Mobile or you have whatever else, they've all merged, so I forget which ones are different, but I should be able to send a text message and you should be able to receive it. Why is that not the same thing with all of our messaging platforms and all of our various other kinds of software and services? Which is so funny, because when you ask really just straightforward, self-evidently, you know, yeah-why-not questions like that: I think that was a 25-year effort to make phone numbers interchangeable and portable. And, you know, Tom Wheeler, I don't know if Tom's here today, but over at the Kennedy School, FCC chair under Obama, worked on NANC, the North American Numbering Council, that's there making sure you can port your phone number. And the European Union came in so that at last iPhones have a USB-C connector. I mean, it sounds pretty technical, but I guess it means you don't have to throw out all your chargers if you no longer like iPhones, kind of thing. And you're saying those sorts of things, for interoperability, would be good. I don't know if others have a sense of a vision between centralized platforms, where you kind of just pick the best one and that's kind of your friend graph and where you're going to live for a while, like the original choice you make of Apple versus not Apple, versus these distributed systems that seem to offer the promise of, like, if I don't like this Mastodon server I can move to another one, or maybe I can port my friend graph to Bluesky. I don't know, does anybody have thoughts on the promise or peril of decentralization? I mean, it's both, right? Like, I think we are at an incredibly exciting moment in the history of the internet, because for the first time in 15 years there is actual serious innovation happening in the social media space, and we're seeing a lot of new entrants.
Mastodon is a big one, Bluesky, Pebble, there's a number of others. And we're seeing development of infrastructure that can help make some of these services interoperable, that can give people meaningful privacy choices. This is super exciting.

Can I just do a quick snapshot, 2023? A completely (social scientists everywhere are like, don't do it) non-random group: how many are on Twitter slash X? I see a number of hands up. How many are on Mastodon? More hands than I expected, but fewer than Twitter. How many on Bluesky? Or, I should be asking about Threads. How many Threads fans? But here's the real one: how many people consider LinkedIn one of their primary platforms, where they post? There you go. I would like to add you all to my professional network. LinkedIn is the sleeper, the sleeper cell, particularly for this crowd. It's going to talk about playing the long game. Yeah, yeah.

I think, you know, just in the interest of having some friction: I agree with you all that we're at a tremendously exciting moment for social media, because it feels like we're kind of staring into the maelstrom of a broader sea change, and so it's exciting to see what might come out of that. I have skepticism about some of the federated platforms, because it's so reliant on establishing the protocol. The focus, really, if you look at the Bluesky mission, is not about this app experience that people have had fun with, but really about building this federated protocol. I think when you have that focus, you sometimes lose the ability and flexibility to innovate on user experience, which is what actually draws a critical mass of people.

So there's the Steve Jobs thing of "it just works," it's seamless, and then there's the build-your-own-Heathkit-radio kind of thing. Yeah. And, you know, those of us, everyone on this stage, you know that when you build systems, defaults matter, and
people don't switch off the default. So the idea that, yes, people can build their own graph, or they can curate, you know, and select their own algorithm for what gets amplified: that is technically interesting, and I am excited that people are going to try to do stuff with it. I am deeply skeptical that that is going to be a feature that people opt into, unless there could be some way to make it pretty easy to do. You have a thought on this?

Yeah, no, I mean, I think that's exactly right. And more than that... I was trying to fight with you, though. So let's talk about another failure condition of these systems: moderation, right? Tarleton Gillespie, a professor at Cornell, has written that the primary commodity that platforms are selling their customers is not the buttons and features and uploading a picture of your cat. It's moderation. That's the whole ballgame. And what we're seeing with most of these new entrants is that they have to reinvent the stuff that the big platforms spent 15 years figuring out. We saw that with Clubhouse, exactly. And so, good Lord, you see platforms that mean well, that want to build engaging consumer products, stumble headfirst into the age-old classic problems that every platform has wrestled with. Like, how do you block people from including slurs in their usernames? Well, you deal with that by having a list of slurs that you block in usernames. But how do you come up with that list? Well, you write down all the bad words you can think of, and then you pass it to your friend. An International Standards Organization list of slurs, yeah, well, that's what I'm thinking about as a solution. But we're seeing that every one of these platforms has to do the same thing that every other platform has already done, and in the interim people get hurt and harmed, because the technology isn't in place to deal with trust and safety.

So is that a form of public infrastructure, Kasia, that you'd want to see built? I mean, yes, or it's just about open source and sharing of information and
frameworks. I mean, so when I was at Scratch, which is really the largest kids' social network on the internet, but also a learning program, I'm just thinking about how we did have a bad-words username list. And I don't know, actually, that a list held by the government or by a third party would have been useful for us, because the kinds of (somewhere the FCC chairs are like, no) the kinds of bad usernames are things like, you know, "space space space space space poop," right? And there are some very... like, bleep that, somebody bleep that. So, you know, the bad-words list might be different based on where you are, on the context, but the idea of there being a bad-words list, and the need for that, should be common knowledge, that kind of thing. In terms of open structures, knowledge sharing I think is very important. I don't know if that's public infrastructure; it's just knowledge sharing.

This also touches on what you said before, and it sort of allows me to bring in the President Obama of it again. Something I think he's very proud of is the work that USDS, the U.S. Digital Service, did, and I think that was one of the reasons why he was excited to be here, because we have folks who worked on USDS. USDS was also the agency that brought me into the White House; I sort of started in there and ended up moving over. And one of the things I learned from talking to the folks who worked on USDS is that government is very good at counting things and making lists of things. It's very good at the data gathering, at saying this is a canonical set of data for this problem. And that's true particularly when you're dealing with new, technologically innovative problems. For example, one of the most important things that the government did with respect to automotive safety was simply establishing a standard for how we count highway deaths and building a scoreboard: this is how many people died this year, and we'd like this number to go down.
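The bad-username problem Kasia describes (spaces and character tricks smuggling a blocked word past a naive list) can be sketched in a few lines. Everything here is illustrative, not any platform's actual implementation: the blocklist, the confusable map, and the function names are all assumptions for the example.

```python
import re

# Illustrative blocklist only; as noted above, a real list would vary
# by language, community, and context.
BLOCKED_TERMS = {"poop"}

# Naive look-alike folding (0 -> o, 1 -> i, etc.); production systems
# use far richer Unicode confusable tables.
CONFUSABLES = str.maketrans("0431$", "oaeis")

def normalize(username: str) -> str:
    """Strip the 'space space space poop' trick and fold look-alikes."""
    return re.sub(r"\s+", "", username.lower()).translate(CONFUSABLES)

def username_allowed(username: str) -> bool:
    """Reject a username if any blocked term survives normalization."""
    folded = normalize(username)
    return not any(term in folded for term in BLOCKED_TERMS)
```

Even this toy version shows why every new platform ends up rebuilding the same machinery: the hard part is not the lookup, it is the normalization and the list itself.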
And showing that to industry, saying, this is the number of highway deaths. And it's controversial. The Europeans don't like the way we count highway deaths; industry probably didn't like the way we count highway deaths. You see all the same problems you have whenever you establish a standard. But that notion of bringing technologists into government to answer the question of what core infrastructure we need to hold a better mirror up to industry is, I think, one of the legacies of the U.S. Digital Service program, but also a way in which government in general can try to engage on some of these questions, whether in social media or AI or anything else.

I still want to figure out if we, and if others, are on the same page about the ultimate goal here. I could see, if we went a couple clicks further, that there might be excitement among, say, the four or five of us about some kind of federated system. So that if you have built the next Clubhouse in your dorm room, and content moderation was the farthest thing from your mind (possibly appropriately, because it was for you, and then 20, and then 200, and then by Thursday two million of your closest friends), there could be a plug-and-play content moderation module, of the sort that Block Party is at the individual level, for somebody trying to pick up the slack for what they feel is not the right setting of the dial for a platform. Is that a good thing? It's going to be set-it-and-forget-it, and it might well be a kind of commons. And the complex decisions about what lawful-but-awful speech stays and what goes, how do those get made in that kind of collective system?

I mean, that's the risk, right? The more that we centralize these systems, and it could be in the government's hands, it could be in philanthropy's hands, it could be in academia's hands, you end up embedding different organizations' values in that. And I worry for the future of the internet about homogeneity as a negative outcome. I worry that all of the
internet... and, you know, Facebook is probably right for a lot of people, but there are values embedded in Facebook's moderation. There are values that suggest that nudity isn't okay, and those are not universally held values. And when you start to make that the default, not just for the existing platforms and their users but for every new technology that emerges, I think you start to end up with an internet that lacks the diversity that meaningfully empowers consumer choice. And the more formalized it gets, the more that government gets involved in defining what those standards are, I worry that we start to have a sort of regression to a kind of bland, undifferentiated internet that makes it harder for people to find the magical uses. You know, when I was a teenager in the closet, figuring out what it meant to be a gay guy, I learned that online, because there were different spaces on the internet with different rules and norms and governance structures. And I think we need that. I think it would be bad for the internet if we didn't have it.

Tracy? I think oftentimes people don't realize that moderation is the flip side of recommendations, and it's not like these two systems operate entirely separately from each other. Most people are familiar with the concept of the algorithm that determines what you're going to see, and they think it's all about what gets shown and what doesn't.
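That flip side can be made concrete with a short sketch: moderation prunes the candidate set before any ranker, user-chosen or not, ever sees it. The `Post` fields, the placeholder policy, and the ranker names below are all hypothetical, chosen only to illustrate the pipeline shape.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    text: str
    engagement: float   # hypothetical engagement score
    recency: float      # hypothetical recency score

def moderate(candidates: list[Post]) -> list[Post]:
    """Placeholder policy: posts removed here never reach any ranker,
    which is the sense in which moderation shapes the candidate set."""
    return [p for p in candidates if "banned" not in p.text]

# User-selectable rankers: "choosing your algorithm" is just choosing
# the key used to sort the already-moderated candidates.
RANKERS: dict[str, Callable[[Post], float]] = {
    "engagement": lambda p: p.engagement,
    "chronological": lambda p: p.recency,
}

def build_feed(candidates: list[Post], choice: str) -> list[Post]:
    """Moderate first, then rank by whichever experience the user picked."""
    return sorted(moderate(candidates), key=RANKERS[choice], reverse=True)
```

Note that swapping `choice` changes the ordering but never resurrects a moderated-out post, which is why centralizing `moderate` has consequences no amount of ranker choice can undo.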
So I think that moderation is a big part of what determines what doesn't even make it into the candidate sets. And to Joel's point, if we have that centralization, there's a single algorithm that people cannot opt out of, and it's just dispersed across the entire internet because it's been plugged and played. We lose a lot, and there are those inherent values that are embedded. The fact is that users can't choose what they want to see. If I go to magazines or newspapers, that's great: I can choose to read the scientific magazines if I want. When I go on Facebook, I just get tabloids all day long. I don't get to choose that, and there's no way to opt into a different user experience there. And what's also dangerous with this is that if the platforms are the central authority, they determine that algorithm, and they can be co-opted by authoritarian governments as well. So if there's a country somewhere that wants Facebook to show certain things and has leverage against Facebook, it can force the algorithm to do that one thing. And so the ability for people to choose different experiences, to choose these different algorithms ("algorithm" is a very technical term, and most people don't care about choosing their algorithm, but if they could choose a different experience), I think you would dramatically change the face of the internet and also make it much more authoritarian-resistant.

Or even choose proxies: I want the internet of, like, pick your celebrity or advocate, or even a blend. I'd like something that makes it an easier choice than a complex one. You get an AI avatar of Ralph Nader that sort of reads the timeline for you.

I think also we're talking about a culture change in our expectation of technology. I mean, even this notion of "I should be able to choose my algorithm" is kind of assuming that people know that they're getting served something that's different from what someone else is getting served. And so I think we kind of want to be able to choose a better way, to choose a better technology, and the types of things
that we can expect. You know, when I'm walking around in the world, there's not someone with a clipboard following me around, yelling things at me and saying turn left, turn right. That would be crazy, right? I have the expectation... That's called a digital assistant. Yeah, okay, I'm not there yet. Maybe one day I will be, but currently I'm not, so I'm glad you just didn't... you totally went there. In the world, I have a certain amount of privacy, I have a certain amount of choice. And when I go online, a lot of people might think that they also have that level of choice, without realizing that they really don't. And so what would happen if we went back to the beginning and required that companies tell you that you're seeing things for a certain reason, and also let you say, I'd like to see this as somebody else, or turn off all personalization, so you could be looking for what you know you want to find? Right now, no one even knows that that's an option. So I think it has to do with culture change, too, in our expectation of technology.

Yeah, it's amazing to think that even the companies themselves at this point may not have, in any one person, a sense of how their own algorithms work. But maybe, you know, I should ask you, from where you've seen so much go down: how do you think about the story we've heard a lot, which is (if there's anything to credit in it) a pushback against the user-empowerment story? Because it says that people at a vulnerable time, maybe a vulnerable time of life (a lot of folks here started as kids; it was their first internet experience): kids are getting online, and maybe, out of a sense of trying to make more secure school environments, they're not getting their introductions through school or through other trusted adults, and then they can go down a primrose path where their choices, and what they're asking for reinforcement on, could be, quote, radicalizing. And I'm just curious, for that story, how much do
you credit that story, what's it missing, and if there's something to worry about there, how would you deal with it without the kind of parentalism that Kasia is worried about?

You know, I think this is the pitfall of the consumer-choice analogies that we're really drawn to. And I'll note, I think it's a particularly American fixation, consumer choice. This is not a universally held view of how people should engage with the internet or how the internet should be governed. What's the other view? The other view, in a word, is that there are certain values that, as a society, we can agree should be built into our technology, and that our democratically elected representatives, i.e., in the European Commission, should express those in regulation, and that that should be binding, with significant penalties. This is where John Perry Barlow said the First Amendment is a local ordinance, right? But, you know, as Americans we're like, yeah, you go to the supermarket and you can buy your Froot Loops or Frosted Flakes or whatever, and that's how we go through the world and how we expect to deal with things. That's not how online harms work. I think consumer choice and the ability to sort of choose works for some things, and it completely comes apart at the seams for others. At Twitter, we would talk about the difference between something being a perspectival harm (it's bad from your perspective) versus something that is a global harm, something that is dangerous whether or not you see it and whether or not anybody else sees it. Imagine somebody posting my home address, a thing that I'm sure many of us have experienced. That's a global harm. It doesn't matter if I see the tweet with my address; it matters that my address is out there on the internet.

And in the kind of distributed, community-run network we've been talking about, that might be a lot harder to lower the boom on, because it's not just three platforms that could potentially stop it. That's right, and it's also not something that fits in this framework of
consumer empowerment, right? When you hear Nick Clegg talking about Facebook's vision, that it will be less about Facebook's choices and more about you picking the options that you want: that's great, but where is the line of responsibility for the hosts of these platforms? There's a battlefield there, and if the edge providers, the platforms, start to kind of opt out of it because they're sick of getting yelled at in front of Congress (which, like, I get it), at some point that fight happens somewhere. Is it the infrastructure providers? Is it Cloudflare? Is it Amazon Web Services? Do you feel you've come to answers to these questions? Really not. Nobody has satisfying answers to it, but we need to be having conversations in these terms: where is the line between where consumer choice is appropriate and where it isn't? And I think there would be significant dissent, even in this room, about where to draw that line. And at the risk of going meta (no pun intended), who's the "we" that has those conversations? Exactly. Of all the failures of democratic governance of the internet, it's that there aren't real structures for even attempting to create consensus around this. Oops. Got it.

Let's talk AI. I feel like we kind of have to. And I don't know, Tracy, if you want to get us started, bearing your AI studies on the kinds of things we've been talking about: to what extent is AI a solution, a complication, or both, for content moderation? Is that a module maybe I could turn on, less a painstakingly, artisanally crafted list of bad words like "poop," versus the AI thing that has been trained on all sorts of bad content (I know where we can get some) and now works pretty well?

I don't think I have a very satisfying answer, but AI is a tool. It's just math and data and some models that we build, and it can be used as a tool to accelerate the review of certain types of content. But there are still a lot of inputs to the AI that we have to discuss and consider. What are the values? What are the types of things that we're
going to say are okay or not? What types of decisions do we want the model to make? What's the threshold that we set for okay or not okay? Is it different in different contexts? Do we train on different data for different communities? There's still a lot that needs to be discussed. So it's not a magic wand that we can wave.

That's carrying forward the unimpressed, in a good way: maybe it's like, hey, it's just a tool. And that's if AI were used by the platforms. What about the use of AI by participants on the platforms, to do a kind of new generation of brigading, sock-puppeting? I could be corresponding with what I thought were 20 really interesting people who are also (and this is what makes them interesting) into my work, and we go back and forth, and it turns out they're all just ChatGPT working slightly overtime, and at some point those friends of mine start telling me how important a Rolex watch is.

I think AI can be very problematic in increasing the distrust that is already rampant across platforms. There's this concept of the liar's dividend, which is that even if something is a true video, somebody could say, well, but that one's a deepfake. And now there's all this uncertainty paralysis: if you no longer know whether anything is real or not, you can choose to believe the things that you want to believe. Anything that you don't like, even if it is real, you say that's a deepfake; something that you want to believe, even if it is a deepfake, you say, no, but that looks very genuine. And so we now have an even bigger problem trying to get to: are manipulated videos acceptable? Do we disallow any kind of manipulation? Do we disallow any kind of synthetic content? I don't think the answer is clear.

Kasia, do you have a clear answer? J.Z., do you have a clear answer? This is why we have the Applied Social Media Lab. Seriously, this is why we have it. Yeah, I mean, if we go backwards to, again, some of the expectations that we should have that maybe we don't: if
one of them is that we should know that we're interacting with an AI, and you want to start from that point, that becomes a technical problem. You can start thinking about actual solutions, or implementing some kind of intervention to be able to identify what is fake and what is real. But the way these technologies were built, there was never the requirement that anything built synthetically using this engine should carry some type of watermark, or some kind of ability to then reverse-engineer and say, is that thing fake or is it real? So again, I don't think that all of these issues are impossible. There are technical solutions, if we scope them the right way, and if there are the right motivations, the right expectations that we have of the technology, and we require it.

I love how it's now the applied time, and the trademark people are like, this is what we were put on the planet to do: we're going to have marks on stuff to indicate quality or provenance. I've had a strange idea going around for a while, and this seems a great group to put it to, trying to navigate exactly among the facts that we're not entirely sure what we want, we don't trust anybody to give it to us, but we need it now. The idea, which I acknowledge seems bananas, is this. When Mark Zuckerberg told a congressional committee that you don't want me, Mark Zuckerberg, judging political ads for whether they have misinformation or not (which seems weird in a democracy, and that's why Facebook will be cashing checks for all political ads, that's our contribution to democracy), he wasn't exactly wrong. He had a point; it just maybe isn't the most satisfying conclusion. My thought was: what if we had, as part of a program, students in social studies classes (let's use the American context, American high schools) who would be shown proposed ads to run on Facebook at election time and, under the guidance of their teachers and librarians, and for a
graded credit, would deliberate together on whether they were so over the edge (however the edge is defined) that they should not run, that it would be a disservice to people to see them. And it turns out, if you do the rough math, there are a lot of high school students. There are enough high school students to vet a lot of ads, and we'll find some way to fund it: an American Rescue Plan, or let Meta (you know, the Oversight Board was just the beginning; here's another 300 million dollars). What's wrong with this idea?

So just to be clear, we are outsourcing content moderation to American high school students. That's the kind of marketing talk that I just don't know how to do. Exactly right, right, right. I feel the piranhas circling here. I like the idea, which you've argued for in other contexts, of using more jury-like mechanisms to do these assessments. I think there is value in that, and that is effort we can bring to bear. I guess I would say it is not going to be sufficient to have a purely SETI@home-like outsourced system for making these decisions. What you're going to need, and I tend to think of this as, what are the structures where we can bring together all of the right players to talk about the principles at play? That's going to involve governments, and that's going to involve industry, but it's also going to involve people from civil society and advocacy. And I think there are a few contexts in which that's happened before. The Christchurch Call is a good example, for the issue of violent extremism online; GIFCT was another program that happened under President Obama. There are models for how that can happen. And whether we're talking about assessing harms and threats from AI, or assessing standards for social media, we need to be looking to those kinds of structures that allow folks from across society to see where their seat at the table is, and how authority can be devolved from the
platform companies to some other entities. Because that's my ultimate advice for the platforms, on the Zuckerberg quote that you gave: the beatings will continue as long as you continue to hold on to that authority, because there's always going to be another example of how you messed up. There's always going to be another example of, well, you said you were against the Nazis, but look at all these Nazis we found. And so you've got to find some way of devolving that authority to some other group, and the groups that I mentioned are the ones that I think need to be around the table. What you're describing could be a part of it, but it would likely, in my view, be insufficient.

Yeah, it sounds like we need new tools in the toolkit, especially with trust at such lows for the old tools: new institutions, new institutional configurations, things that try to draw the best of public and private and community, rather than what sometimes seem like the worst. Yeah. And just to again go back to what I think President Obama would say if he were here: he views this as a significant focus of the post-presidency. It's the reason why he spoke at Stanford last year about misinformation issues and the information economy, why he was excited to come here and is looking forward to visiting in the future with members of the lab, why we also have Voyagers from the Obama Foundation here, and why disinformation and issues of AI are going to be topics that we discuss at the Obama Foundation Democracy Forum in November. He views using his post-presidency as a way to think about: what are the structures, what are the convenings, what are the ways in which we create these lines of effort? Because it's going to require participation from all those groups I mentioned. Other thoughts?

Well, we're nearing the end of our time. I always thought of Marc Andreessen as that nice Netscape guy, and apparently he had a manifesto the
other day. I couldn't check, because he blocks me on Twitter. Hi, Marc. I don't know why; I don't think he probably uses Block Party. But part of it says: it is time once again to raise the technology flag; it is time to be techno-optimists; combine technology and markets and you get what Nick Land has termed the techno-capital machine, the engine of perpetual material creation, growth, and abundance. Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, unaccountable (this is a run-on; sorry, I'm playing into his hands), playing God with everyone else's lives, with total insulation from the consequences. There's definitely a lot of views out there from some industry folks. I'm just curious: if you had a jeremiad, a manifesto, a few words, what would you want to offer into the discourse as people think about our digital future? That's one vision, and I'm sure we can even find elements within it to agree upon, but does anybody want to offer their own couple sentences?

I'll take a swing at the manifesto. That's a very free-market manifesto, about the need for continual growth and the need for a relaxation of all constraints: regulation, trust and safety, those things. What I would say is, even if you are a free-market maximalist, even if you are an anarcho-capitalist, the idea of imbuing democratic values in technology platforms is not at odds with your end goal; in fact, it's in service of it. Because the entire system that exists, that allows you to reap the benefit of this market, is built upon the traditions of democracy, whether that's rule of law or participation in the market by all parties. Those things need to exist for these platforms to achieve the dynamic growth that you rely on, both for your ethical reasons as well as your financial ones. And to prove this: you don't want to be a billionaire entrepreneur in China. That is a context in
which, if you run afoul of the state party, you can just be disappeared. I think that is an important thing for us to remember: it's not at odds with the growth and success of these platforms that we consider things like how to imbue them with democratic values, how to make them more safe. Instead, it's what allows the dynamic innovation economy that America has pioneered to exist. Got it, thank you. I feel like I should have been taking notes; that was very good. Anyone else? Tracy?

I think everyone deserves the right to participate in a healthy, safe digital ecosystem and partake in digital society. I think we can be a lot more ambitious in public-private partnerships with the government, to achieve really audacious goals around internet governance, in the way that we did the space mission and COVID vaccines. It's an alignment of government with private industry to solve the really difficult problems. Got it, thank you.

Well, I tried to read the manifesto. It's about 400-odd sentences that all start with "we believe." I had Claude summarize it; it's a statement of doctrine. And I guess my response to the manifesto is a counter-manifesto, which is simple: "we believe" statements don't work for internet governance. If you think there is an obviously right answer to any content moderation question, to any governance issue, there isn't. You have only bad options. Your decision is bad, and your task is to figure out what the least bad one is, given your goals and constraints and threat models. It's naive to believe, or to expect, that there is a single universally right answer. Sorry, Netscape guy.

Kasia? Yeah, so, something that I definitely saw when I was working in government: people would come and say, we'd like you to make an app, and we'd say, okay, we'll take that under consideration, but why don't you tell us about the problem? Because often technology is not the solution. It's a people problem, or it's a process problem, or it's a culture problem, or something like that. So I guess I don't, like,
actually really believe in manifestos, but you're forcing us to create some, so I guess the line would be something like: technology might not be the solution; people fix problems, not technology. Fair enough. I love the old "I'm not much for manifestos," but then you, like, drop the mic.

So: the concrete over the vague, action over complacency, imagination over status-quo-ism, both literally the status quo and thinking there is but one inevitable trajectory that we're on, and it's just a question of whether we close our eyes as the roller coaster rushes up the hill. These are the things we want to elicit and cultivate in just one corner of this analog and digital universe as we start this lab. We have two further sets of discussions that follow; we really hope you'll stick around for them. This is the most secure environment you've been in in a while. We have our faculty coming up, and we're going to hear about all sorts of ways in which (concrete over vague, imagination over status quo, action over complacency) our faculty have been doing extraordinary things. I see Latanya Sweeney right there, and her public interest work, the way she is, especially for the students here, helping people think about public interest technology and get involved in it. And then a follow-on panel with Nabiha Syed, Nancy Gibbs, and Martha Minow on the state of social media right now and what's next. Really hope you'll stick around for this. And I am so grateful, we are so grateful, to our panelists for your contributions today, for such a sprightly conversation, with eyes open toward the realities and the negative stuff, but also a determination to see what it would be like to build something that works better for everyone. So thank you all so much. So grateful. Please welcome to the stage James Mickens, Larry Lessig, Latanya Sweeney, and DJ Patil. Say, bar the doors.

Okay, well, for those that don't know me, I'm DJ Patil, the former U.S. chief data scientist and a general partner at GreatPoint Ventures. And what we
have here today is a remarkable panel to talk about where we're going on a number of these critical issues: how we are going to carry this work on the internet forward, and how to think about the future, fundamentally. Our three panelists, just a quick overview of them, in no particular order. James Mickens is faculty director of the Applied Social Media Lab and is on the board of the Berkman Klein Center for Internet and Society. He's also a professor of computer science at Harvard University. Some of the very special things he focuses on are the performance, security, and robustness of large distributed web services, and a lot about how the fundamentals of the internet work. We have Latanya Sweeney, who's been heavily influential on my own work in policy: the Daniel Paul Professor of the Practice of Government and Technology at the Harvard Kennedy School and in the Harvard Faculty of Arts and Sciences. One of the really insightful things she brings to the table is that she's a former chief technology officer of the U.S. Federal Trade Commission. And if you've ever heard about HIPAA, HIPAA is fundamentally based on the work that Latanya did. Larry Lessig is the Roy L.
Furman Professor of Law and Leadership at Harvard Law School. He's taught at Stanford, where he founded the Center for Internet and Society, and also at the University of Chicago. In particular, he clerked for Judge Richard Posner on the Seventh Circuit Court of Appeals and for Justice Antonin Scalia at the Supreme Court. He's the founder of an incredible number of things, including Equal Citizens, and a founding board member of Creative Commons, anytime you see that CC logo. And he's written extensively, a number of books which are some of the must-read books in the space, on how to think about the formation of the internet and the contract systems that we adhere to, and I've just been a long-time follower of his work. So maybe to kick things off, James, I want to start with you and really set this up against the backdrop of what we're seeing happen right now. You know, this next week marks the launch of ChatGPT, which is the fastest-growing product ever. We have every student who's applying to college using this technology; we have every student using it in some different way; universities are trying to figure it out. We have the conflict in the Middle East happening right now that we have to address, and the role of social media. Vaccine hesitancy: the surgeon general has weighed in on the issues that are being heard from parents and physicians. And we have an election coming up. So, as the leader of the new Institute for Rebooting Social Media, you've got this three-year research initiative to address social media's urgent problems: misinformation, disinformation, all these things together. How are you going to bring together the different groups to tackle this problem? That's a long list of problems to solve. I was going to lean on the president here and ask him for some inspiration, but I'll try to channel him. There are a lot of problems that you just listed, and I think what's interesting about that list you just gave is that it
really calls out the fact that technology is incredibly pervasive in our lives, and that while technology isn't always the solution to problems, it is oftentimes adjacent to and involved in what those problems are. And so I think we're actually in a really unique moment in time right now where, even though different people may disagree over what the fixes are, technologically speaking, people know that something is not right: both the left and the right, across political spectrums, across personal backgrounds. I think one thing we want to do with this new lab we're creating is make sure that the solutions we try to come up with, to the extent that they are technical, are actually grounded in good science and good engineering; that we're actually getting software developers, HCI people, folks like that, statisticians, involved in the work we're doing, to make sure that when we find a problem and say, oh, this should be fixed in some way, the solutions we're coming up with are actually feasible from a technical perspective. I'm sure a lot of people on the stage or in the audience have heard someone who's very well intentioned in the policy sphere, kind of akin to what Kasia was saying, say: oh, there's a problem, let's just build an app, we'll just make a website, we'll just make a large language model, and that should probably fix everything. But it's not that simple. You need to have the right people in the room to make sure that the solutions we're coming up with are actually technically grounded in solving the actual problems that we want them to solve.
Well, one thing that seems very unique in what you're doing is that you're bringing builders into this process; all the panelists we had before are builders of these technologies. So how are you going to convince people to leave their big-salary jobs to come into academia to work on this, and then how are you going to focus that to actually create tangible solutions? What does that look like? I plan to recruit mainly using personal charisma and blackmail; those are the top two. Kidding, of course, in case any members of law enforcement are out there. A thing I think is interesting about this moment, as I was mentioning before, is that there's a lot of feeling out there, even amongst engineering people, even amongst developers, that something is not quite right with the way these big tech companies are operating, and in fact what these small tech companies are doing. And so, if you'd asked me that question, let's say, five years ago, I would have been less optimistic about our ability to bring in really talented folks from industry to work on tech for the public good. But already we've been hearing from a lot of people, both seasoned technologists and also students, many of whom are in the audience, saying there has to be a better way to build these products, there has to be a better way to think about who technology is serving and how we should build it to center the public good. So I'm actually not that worried about being able to recruit people. The thing that's the biggest challenge to me is figuring out what exactly we should do in a way that is centered on the public good, that listens to the needs of real people, including those who have previously been disenfranchised or ignored by technology. How do we figure those things out? And I think if we can answer those questions, then the tech talent will be there to help us find the solutions. Well, LaTanya, turning to you, you've also been on the regulator side, helping out the FTC, and so
I'm curious, especially as a person who helped create, I mean, helped lead this entire field of public interest tech: what do the regulators need in this moment, and what are the efforts you are championing here to help on that side, from the government, the executive branch? Yeah, I think one of the ironies from the first panel is that the manifesto the guy from Netscape put forward is actually how we've been operating for the last two decades, and the people who would normally help us, the regulators, the journalists, the civil society organizations, have given technology a free pass, have just not been engaged at all. Part of what really stuck out at me when I was at the Federal Trade Commission is that we had amazing ways of finding deceptive ad practices and monopolies and so forth if you were a brick-and-mortar building, but they didn't have any way of taking those efficient and effective techniques and applying them online. And part of what I did there was basically building labs and building tools so they could learn how to do their job better, because they could now do it online. As we now look at how much of our lives are online or have this technological component, if you think of everyone who would normally regulate us, or regulate those areas, or our laws: all of them are currently up for grabs by what technology design allows or doesn't allow. I have a 15-year-old son, and when he was younger he and I got in a big debate about what free speech is, which parents will do. He went on to talk about what he viewed as America's free speech, but it was Twitter's view of free speech, which wasn't America's view of free speech. And I was fascinated by that, because America's view of free speech really gives space for the underdog, for the voice who would otherwise be drowned out, to still speak, whereas in Twitter's notion of free speech, the crowd is freed to drown you out and intimidate you, even offline.
What I found more disturbing, though, is when I surveyed students at Harvard, how many of the undergraduates had a view of free speech that was similar to my son's, similar to Twitter's. And it begins to help you understand: if the FTC can't enforce price discrimination rules or the Civil Rights Act or any of these other kinds of laws we have because the conduct happens online, it makes you understand how ineffective the regulators have been and how much freedom the platforms have had. The question is how we shore this up for those who would help us: how do we build in technology and mechanisms for them to do their jobs? Riffing off of that, you had a big announcement yesterday, and I want to hold space for that announcement, because it dovetails with this concept of not only free speech but also what is actually happening inside the platforms as they try to figure this out. So could you talk about that? Yeah. Yesterday, many people may recall, Frances Haugen was the head of civic integrity for Facebook, and she leaked about a thousand documents from inside Facebook, which are collectively known as the Facebook files. We were able to get a copy of those, and we have worked hard over the last year or so to solve all kinds of privacy and security issues, and we just made them public yesterday at fbarchive.org, just to give a shout-out for it. The reason we made it public, and the reason we took on that task, is that it actually isn't about Facebook; it's about all of these platforms. What's going on behind the scenes with respect to content moderation? What's going on behind the scenes with disinformation, what they know and don't really talk about outside? And you realize that those problems aren't Facebook's problems; they're across the board. We just don't know how to build trust at scale. We just have no idea how to do content moderation beyond a word list or so forth, which is about the level of technology that they have. Facebook is international, and in those documents we had 20 different languages that had to be translated, and the
moderators for those lists don't have any idea what's going on in those countries and rely on Google Translate to tell them whether or not to allow the content. I mean, the number of issues is huge. I think the goal of releasing those documents is that in five years, if we can provide coherent answers to these issues, whether it's new technology, policy, or just knowledge and insight, then in five years can we enjoy the benefits of social media without the harms? Having been the builder of that boring social network people were referring to before, where we send you lots of updates about your LinkedIn connections, and having also been responsible for trust and safety there, it really resonates with me, because of the complexity of what it takes to actually build these things, figure them out, and navigate it, and then also not having anybody you can really talk to, in fairness. And so, Larry, I want to turn to you, because in the beginning of the internet, as we heard, it was about discourse, the conversation; it was about finding ways to engage; it was the utopian version of that. And it feels like we've gone through several stages, if not stages of grief, as we've gone through this. You've been at the forefront of all of these transformations, and I'm wondering: what is the modern version of where we stand today on this discourse and deliberation, and what does it need to look like? Yeah, so the conversation today so far has been a lot about how we change the internet. I think this lab is also going to think about how we begin to change democracy, because I think there's an urgent need to rethink what we imagine democracy is right now. Democracy for us is a bunch of elections and a bunch of, you know, clowns in Congress, and that's our conception of what we should be doing. And if that's the conception of democracy, we're toast. I sometimes like to think of it like this: you're the captain of the Titanic, you've just hit the iceberg, you step out onto the deck,
and you see all the overturned tables, and you think, okay, we can fix this. And then your crew comes to you and tells you that there's a gash and six of the compartments are filling, and you realize it doesn't matter whether we fix this; the gashed hull means we're going down. And then you've got to convince people to get into lifeboats, which, in the middle of the Atlantic, in the middle of the winter, in the middle of the night, on the Titanic, is not an easy task to do. And so the analogy here is: I've been working for 17 years on how we fix our democracy, money in politics, gerrymandering, and that's the overturned tables. We know what we would need to do to get us, maybe for the first time, a representative democracy. But there's a gash in the hull that in some sense means it doesn't really matter, because we are under such a threat that even fixing this won't fix the democracy. And the gash is AI, broadly conceived. I don't mean ChatGPT; I mean artificial intelligence, with corporations as the most important first artificial intelligence that began to muck up democracy. But then add the first contact with AI: social media, which mucked up democracy for totally different reasons, but in really profound ways. And then second contact: ChatGPT, AI, all this sort of stuff, which we will see in 2024 in a really profound way. I think we need to realize there are entities whose purpose is not to make us a healthy democracy, who actually have more power over our democracy than we do right now, and that in a certain sense we have to find the lifeboats to move us into a safe place where we can do democracy without being mucked up by these really powerful forces. I think deliberation is a central part of that. So one of the things we've just determined to do is to enable distributed, scalable, extremely cheap deliberation for anybody around the world who wants to build it inside their community, their college, their game, whatever. We're about to
acquire, I'm not sure where we are on this, but about to acquire a really powerful deliberation platform that has solved a lot of the really hard problems of deliberation. We will open-source it immediately. We will then begin to build an opportunity for people to embed it inside their own infrastructure: an API for deliberation. You're in the middle of a game, you push a button, and you're inside a healthy space for deliberating on your problems, in small groups that aggregate quickly. Think of it as the kind of Google Docs for deliberation: very cheap, easy, accessible, and powerful, to enable people to engage in this practice. Because I think if we don't learn how to talk to each other again, and to listen to each other in a safe and healthy way, we're never going to have faith in democracy again; we're never going to have a conception that there's a reason to turn over to ordinary people the project of making a choice. And so this is one way, I think, to begin to build a different conception of democracy, which I hope will include things like citizen assemblies making really important decisions about what local communities should be doing, maybe even what the nation should be doing, but in contexts where AI is not going to invade and pollute and corrupt and distort what democracy could be. Maybe take us a little further along that dimension of what that looks like specifically right now, because we are fraught as a country across these questions around deliberation. We were calling it filter bubbles before, I mean, you coined a bunch of the key terms in this area. But we could take abortion as an issue, we could take the current issues happening right now in the Middle East, we could take a whole slew of others, vaccines. And what we've also seen evidence of is that when you expose people to certain problems, they actually get more entrenched in their view, or they get potentially more radicalized. So how do we actually get this rolled out
and executed on? Right, so there are two important points here. Number one, it depends on how you design the mix of people in the deliberation, and number two, it really importantly depends on the type of topics that you at least begin with. So you wouldn't begin with abortion; you wouldn't begin with the extraordinary horrors in the Middle East right now; you wouldn't begin there. What you want to begin with are projects that give people the experience of actually recognizing they can come to some understanding, to build that muscle. With the first version of this deliberation platform, deliberations.us, we had a deliberation around the Electoral College, and we were able to produce, through thousands of people participating, consensus around reforms for the Electoral College that were cross-partisan but very different from people's attitudes when they walked into the virtual deliberation room. Their ideas changed, and they changed because they heard other people like them; they realized the other side was not a bunch of lizards; the other side were actually people with similar values and hopes and dreams for their kids. And so I think the strategy is to leverage the capacity we have to make sure we can build healthy environments, safe environments, and then the muscle that gets built by experiencing and exercising that in that environment. And that's the idea: spread it everywhere in this healthy way, and begin to demonstrate that in fact it's not true that every group sitting down and deliberating turns out to be a polarized mess. If that's what's produced, it's because it hasn't been architected in a careful enough way up front.
I wanted to ask this question: you're here at Harvard, you're training some of the preeminent minds, the future people who are going to build these platforms. If you imagine that you could take the leaders of these platforms, the Mark Zuckerbergs, the Jack Dorseys, the others, pick your favorite genre of people, and put them back into your classrooms so that they would have the ethos of knowing what's coming ahead, not only on the platforms with misinformation but the potential of what AI is going to do: what are the key things you would want to instill into their knowledge base and understanding, so that they could go forth and we'd feel confident they are going to be good stewards of these platforms? Maybe we'll just go across this way, starting with you, LaTanya. Yeah, well, actually we do have an army of such students. We have a concentration area in government called technology science, and we've had hundreds of graduates, and we teach them to be public interest technologists. And they've gone on and done amazing things: laws have changed, regulations have changed, business practices have changed because of the work they do. What do we teach them? We teach them how to identify technology-society clashes. And if they're working at the point where technology is being created, their goal is to do the hard thing. The hard thing is: how do I look at how this technology goes wrong? Who is it that what I'm building doesn't work for? It just won't work for them. And what is it that I can do in the design of the technology to combat that problem? This has been very powerful as students have gone on, because a lot of times the clashes between technology and society that we've talked about, we see them when the product is in the marketplace. But a lot of times they could have been solved so easily in design, and the opportunity is lost once it's commercialized. If you can catch it during design, you can modify it. It's not an easy task, because
already it's hard to produce some new technology that doesn't exist already, so you're asking them to make their work harder. But if they do that, they can bring to bear technology that is going to harmonize more with society. James? Well, if I could get some of these tech titan CEOs in the classroom, I'd basically force them to take a little arts curriculum. The reason I say that is because, and this is changing with the younger generation of engineers, which is heartening, but I think there is still a tendency among some engineers to say, I just want to do the maths, as our British friends would say. And they say, well, the implications, that's for someone with a different job title than engineer or technologist. And that's just completely false; we no longer have the luxury of thinking like that. When I think about artificial intelligence and when biases are encoded in training sets, think about image recognition algorithms that can't classify women or people of color, things like this. How did that happen? These weren't intrinsically technological failures; they were failures, in many cases, of empathy or moral imagination. And so I think it's really important, when we talk about how we're going to train the next generation of engineers, to recognize that it's not just about the science and the engineering; it's also about understanding how your products will be embedded in the outside world. Because it is a position of great privilege to be able to say, I can just build things and then not worry about some of these externalities. That's a position of great privilege. And technology is amazing, just to be clear; I am a technologist. I remember when I was a kid, I would see stuff in the science fiction movies that is true now. It used to be on Star Trek that you could talk in one language and then get translated to Klingon or whatever. That is sort of objectively amazing. But it's also objectively disturbing when
we see, for example, algorithms being used to encode pre-existing societal biases in terms of who gets mortgages, who gets sentenced at parole hearings, and things like that. So I really think there is a big issue of increasing awareness among engineers that you're not just making widgets for the shareholders; you are making some key foundational pillars of what we hope will be a healthy society, and you have to think about that explicitly. You're not just going to automatically fall into the right thing that just so happens to serve the public good. I'll just share real quick, before I get to Larry, one of the greatest ass-kickings I got from President Obama, and since he's not here, I can share it. We were building the Precision Medicine Initiative, and he had said, you need to make sure that the people who are going to be impacted are at the table. And so we did a little bit more of the classic thing we do in Silicon Valley: we got groups that represented those populations, we created personas, we had the rare disease network, et cetera. And we came back to him, and he said, do you have the people at the table? And we said, yes sir, here's what we've done. And he said, I thought I was clear. And when the president says something like "I thought I was clear," your day's going to suck; it's not going to go well. We went back, and what we realized is that one of the fundamental flaws we have as builders of technology is that we use personas, and personas are the equivalent of going to a photo frame shop, where you see all the people in the photo frames smiling with their eyes open, and you think, that doesn't happen in real photos. When we got the real people around the table, it fundamentally changed the way we built, and it was a really important, eye-opening lesson that I wish I had had substantially earlier in my career. But I'd love to hear your take: if you could transport all these people into your classroom? I think we have to build a discipline to challenge happy thinking. You know, when you listen to
these technologists talk about what they were going to build, and I remember, 20 years ago in Silicon Valley, listening to them talk about what they were going to be building, there was all this happy thought: it was going to be the best of all, it was going to be extremely profitable and make society a wonderful place, rather than recognizing the deep conflict that often exists between the business model and the social objectives. And I think the best place to see that is in fact in LaTanya's archive of the Facebook files. I had the honor of representing Frances Haugen when she first came out, in the steps that led to that. I didn't do the training, but the people who helped her stand up turned her into an extraordinary spokesperson for the tension she had experienced inside of Silicon Valley. But if you look at the Facebook files, you will see all sorts of examples of engineers, good, serious engineers, raising their hands and saying, hey, we should do this and this and this to make this platform safe, or, we can't do that because it's going to lead to all sorts of bad consequences. Again and again, they were doing the moral, ethical thing for that platform; they were raising the issue. And again and again, they were overruled by the business model, overruled by people who said, no, no, no, our objective is to maximize engagement; that's what we've got to do; that's what Wall Street says we have to do; and that's what we're going to do. And I just wonder how many times there were engineers who at a certain point realized their whole presumption about what their life was going to be was false; their whole presumption that they were going to go out and do good was contingent upon good being consistent with a business model, and the reality is that's not happening. The things that we need in society: we need news that is trying to help us understand the world, not news that's trying to maximize the amount of time you're going to spend going down rabbit holes about all sorts of crazy stuff.
But the problem is, the business model of the people providing us news right now is trying to figure out how to get you to go down a rabbit hole and spend all your time looking at all this crazy stuff, rather than helping you see the issues in all their complexity and how you're supposed to be dealing with that. And if we don't... So, Martha Minow's father was one of the most important people in the arc of American news development. As the head of the FCC at the beginning of the 1960s, he remarked on the vast wasteland that was television, which is kind of hard to imagine compared to where we are right now. But his speech triggered an extraordinary rethinking of what news would be, and it led to a period of 25 years of a really important ability for us to understand the world around us. Not completely: there were race issues that were not discussed, poverty issues that were not discussed, sexual orientation not even an issue according to that view of the world. But still, it helped us understand, because the FCC basically said: your business model is not going to worry about returns from telling the story about the world; you have got to make that part of what you do. We can't do that right now. What's the role of the, this is a preeminent place, the business school, the law school? We've had lawyers who've come out of Harvard who represented tobacco or junk food; that's kind of a similar, maybe an imperfect, analogy. We have people come out of the business school who go on to become venture capitalists, who are the people who can overrule the engineers as product managers, or who are going to go start companies. What's the role for them, in those parts of the institution here? So I think one really important change that should happen tomorrow is that engineers should begin to have an ethical obligation imposed upon them as engineers, in the same way that lawyers do. In the context of the Trump cases, there are many, many examples where the lawyers would not repeat what Donald Trump
was saying, because they had an ethical obligation, and they knew that they could be punished for saying false things the way Donald Trump was saying them. I think if engineers inside Google or Facebook, or if there are engineers at Twitter anymore, I don't know, but if these engineers had the ability to say, you know, I just can't do that, because that conflicts with my ethical obligation as an engineer, and you can't tell me to do that, because that's illegal given that this is my ethical obligation, we could begin to put meat, or power, behind the idea of ethical constraints operating in the context of technology. And that just doesn't exist right now. Did you want to comment here? Yeah, I did. I want to say that the obligation goes further. When we think about public interest technology, we purposely say technologists; we don't say engineer, we don't say computer scientist. We say technologists because it turns out that these technology-society clashes can be seen anywhere along the technology life cycle, and who can intervene, who has the power to make the decisions, changes as you go through the life cycle. Only in the very beginning is it the engineer or the computer scientist; then somebody's got to figure out how to make money on it, and so now the business package comes in. But if the person crafting the business package also has this same eye out for what the technology-society clashes are, and we give them tools for how to look for them and how to resolve them, then we come out with a business case that doesn't have the clash. It gets into the marketplace; then we need regulators, we need policymakers and others who know how to do their jobs using those same kinds of tools: this is the clash, what are the tools I can bring to bear on it, and so forth. So what we have found over the years is that when we reach out to disciplines around the school, some of the most amazing work has come not only from computer science and engineering and statistics students,
but also from history of science, from psychology, and so forth. These students' work has made huge changes in all of our lives: they've changed practices around prices and so forth, new laws, and what have you, that have just dramatically improved the fabric of how we live. The problem is one school can't do it all, even if you reach out to all of the disciplines around the school, and so that's why we have the public interest tech network, which is now something like 80 schools trying to do the same thing. It's a great point that you raise. I went around over this last year and interviewed some of the seminal data scientists, and this will all be released free to the public on LinkedIn. The common thread they all said is essential for a data scientist is liberal arts training; they said that's the only place you're going to get these skills currently, and they need to expand that. You also brought up this word, which I think is an appropriate word: clash. We have these clashes, you know, as we're entering the middle of the third industrial revolution. We've had clashes on privacy, we've had them in other areas, social media, and we're entering AI now. What is this clash going to look like? Do we need a czar at the federal government level? Because at the privacy level it's unclear exactly who owns this; it's sort of a hodgepodge. And so I would love each of your takes, James, starting with you: what should we be doing as we enter this next clash? Well, it's hard to figure out what the right answer is, in the same sense that, when we were talking about moderation in the last panel, there's never a perfect moderation strategy. Instead there's a series of decisions you can make, all of which have badness in some aspect, and you have to pick the one you think is best based on the context. I mean, even though I've been sitting up here criticizing technology, an old-man, get-your-football-off-my-lawn type thing, I think
technology has a lot of promise. I think it is also possible to over-regulate industries, and I think it's important for people to think about that, because sometimes you hear policy suggestions that are very invasive in terms of how companies can move forward in releasing new technologies, and so on and so forth. That being said, I think there are interesting analogies to, for example, the environmental protection reviews that you have to go through if you want to build buildings in certain states or certain locations, whereby you're basically required by the government to do some form of due diligence before you go out and possibly release something into the world that may cause any number of harms. And so I think there have been a lot of interesting proposals in the AI space around things like red teaming: basically getting people to go in and essentially try to attack the model, try to trick it into saying things that are racist or sexist and so on and so forth, and give those results back to the model creators. So I think things like that are great ideas, but I don't think that anything any of us would come up with today on this panel is going to be perfect. I think the big thing we have to do is encourage industry to think about these things, and we have to work together to try to figure out what works well and what does not. So instead of just saying, we're going to pass one big bill that hopefully is going to be the be-all and end-all of AI regulation, let's say, I think we should do a couple of things. We should be bringing together not only people from government, the regulators, but also people from industry, academics. That's one of the things we want to do here in this new lab: bring together those people from across sectors to start wargaming some of this stuff. And to be honest, I'm not quite sure what the best approach will end up looking like, but I do
know that the only way we're going to get to a better state is if we include voices from a lot of people: not just technologists but also social scientists, regulators, and, importantly, users, regular people who will be impacted by these technologies. We only have three minutes left. Larry, I want to go to you, then Latanya, and then finally a quick 30-second wrap-up. Looking six months out, post-election of 2024, what are the key things that we want to make sure, if we're looking back in retrospect, we would have started today? Well, I think that the foreign AI threat is huge. I mean, we're going to see the first round in the Taiwanese elections in January, where the Chinese will deploy AI against those elections, and they have a very sophisticated defense system. Audrey Tang has been really powerfully effective in building that defense, but it's not clear what will happen, and that's just the first round. They'll then come to the United States in 2024. And not just as a warm-up act. Sorry, that's warm-up training. Yeah, that's a warm-up training act. So what we need to do is look at what happened there and then scale it up, you know, orders of magnitude, to protect us in 2024. Because it's not just going to be the consequence of Facebook trying to maximize engagement; that's bad enough. It's also going to be intentional foreign actors eager to screw this up in a really dramatic way, and we're totally vulnerable to that. I mean, we have not stood up a fraction of what we need to protect ourselves against it. Whose job should that be? Is it the president at the end of the day? Is it CISA? Is it Congress? Whose job is it? Well, if it's Congress, we're in real trouble. That's where I was going to go. It's defense. I mean, we used to have a Defense Department; then 9/11 happened and we discovered it didn't really defend us, so we set up Homeland Security, and that does a bunch of things. I think there's a need for a digital defense department. I was
talking to you about a course I want to teach next fall. It's going to be called Digital Defense Department: what do we have to build to be able to defend ourselves against the range of threats that we understand the internet has introduced? Not just foreign threats: domestic threats, fraud, and all the insecurities that are built in. And the intuition about how to do that requires understanding the relationship between technology and policy, the kind of thing that your students, I think, have intuitively. Latanya? Yeah, I know I have very few seconds left here. I think six months after the election we'll wish we had gotten the energy and the collective will together to really earnestly tackle this problem. This is not the first wave of these technology-society clashes, or the second, or the third. Privacy was the first wave, and we haven't solved it, but now we're literally at the brink of huge, huge disaster going forward. In 2016 my students were first, we were first, to find these persona bots on Twitter. They looked like real people, they acted like real people. They only had a few followers, but all of their followers were human, and later we found out these bots were put there by state actors. Now anyone can do the same exact thing through generative AI. They can make impressive websites and so forth that look like news websites, so when your followers click the link, they get reinforced. They can choose keywords so that when you google a phrase worded that way, only the false information comes up in the first hits. When you realize how easy that is to do at scale, it means that we have a real problem. How do we know what to trust? How do we not get information just homed in on us? And lastly, the AI models are only about us, about the American public, so it's not like we could turn around and use the same kind of approach on another country. Fifteen seconds each: what do you want the public to do? Why do I always get these hard questions first? Let's see. I think the
main thing that the public can do is learn, educate, and become more empathetic. I think the last part is particularly for technologists, who haven't always traditionally been as empathetic as they should have been to all the people who are affected by their technologies. Larry, and then Latanya. Slow down. I'm a big believer in the slow democracy movement, like the slow food movement. Start understanding the world not through Twitter slash X or Facebook or these fast media sources; start listening to podcasts, to long-form journalism, to efforts to make the complex understandable. Latanya? I would say engage. People are trying to come up with alternatives and other ways of doing things, and part of the way we got here was blinders: just believing in the shiny new thing and running towards it without paying attention to what were clear signals all along. I don't think that's the way forward. We want you to enjoy the new technologies. I got my Oura ring and so forth, but at the same time we have to be aware; I don't know all the places my Oura data goes, for example. Please join me in thanking our panel: Latanya Sweeney, James Mickens, Larry Lessig. And we're going to have a panel that follows on top of this great effort in just a minute. Please welcome to the stage Nancy Gibbs, Nabiha Syed, and Martha Minow. Let's try: good afternoon, folks. Empty coffee cups, full bladders, but you're still here. If you are at all interested in how our information ecosystem has become what it is, then you are going to love the next 30 minutes of your life. I am so thrilled to be here with two of my personal heroes. On my left I have Nancy Gibbs, who is the director of the Shorenstein Center and the Edward R. Murrow Professor of Practice here at Harvard, and on my right I have Martha Minow, who is the 300th Anniversary University Professor at Harvard Law School, also former dean, which means that we have a perspective on what's been happening with media and what's been happening in the law, converging in the
thing that we call our information ecosystem. I don't know about you, but I don't look at our information ecosystem and think that it's working perfectly. I have notes, I have comments. But just to set the table a little bit: when we think about our information ecosystem, we're talking about where we get information, where we come across expertise, where we get context for what's happening in the world. And for many, many years, journalism and media institutions were the gatekeepers for a lot of that: who would surface information, who would surface context. They weren't the only ones, but they really captured our attention in doing that. The promise of the internet, the question that JZ began the day with, the thing that made people excited, was that gatekeepers had emerged in doing that kind of work, and what the internet provided was an opportunity to remove some of those gatekeepers: to get perspectives of people who had not had a seat at the table, to provide more information, more context, more expertise. And that seemed great. That's when I came online, and I was very excited about it. But here I am in 2023, and I think that dream is maybe a little bit of a nightmare sometimes. So Nancy, I want to turn to you and talk about the role of these historical gatekeepers, of the media, and what's happening with journalism. We've talked a lot about news deserts, about how so many are underserved, how many information needs go unmet. At The Markup (hi, I'm Nabiha Syed, the CEO of The Markup; I didn't introduce myself, I was too excited to be here with them) we talk about news mirages: areas online that look like you're getting information, but it's not verified, it hasn't gone through any of the standards and processes we'd appreciate in journalism. And so I'd love it if you could just talk to us a little bit, frame for us, how technology radically changed, not social media, we'll get there, but the media space. Well, if you start with the idea that,
however imperfectly, the press, traditionally defined, had a constitutionally protected role for a reason, which was a belief that some sort of independent accountability function was essential for a free society. However imperfectly that role was performed, I think that was a very important idea, and however narrow and unrepresentative those gatekeepers were, we saw, especially through the 20th century, the development of standards and rigor and practices and ethics around performing that role, which the arrival and the power of the platforms made dramatically harder. And I would say for three main reasons. One, as Larry pointed to, they completely blew up the business model. And it wasn't just that eyeballs started moving from print newspapers and magazines to digital; it was that advertising moved not from print to digital so much as from print to just the platforms, so that fairly quickly just Google and Facebook were soaking up about two thirds or more of the advertising dollars, and every news organization of any kind was left to divide up the rest. That meant a lot of things. For one thing, performing that basic accountability function costs money. It takes resources; it takes boots on the ground who are going out and reporting the stories and finding things out. You know what costs a lot less than doing that?
Having an opinion. There were many reasons why opinion, even as it tended to drive a lot of subscriptions and engagement, which was profitable, was also cheaper than the hardcore news reporting, especially in the places we need it most, especially in war zones, which is dangerous and expensive and hard to do. And you know what? Audiences aren't often as interested. So you have all these downsides that go with the important, public-interest-focused kind of information creation, set against the cheaper, easier, more engagement-driving kind. To the extent that some of these trends reflect human nature and what people care about, fine, but the fact that resources were drained away from that core function, the very reason the independent press was protected in the first place, the platforms had a lot to do with that. And then of course the larger context is the extent to which they drove an attention economy. I mean, those gatekeepers had the luxury, if you were the anchor of the CBS Evening News or the editor of Time magazine or the New York Times: Time competed with Newsweek, the New York Times competed with the Washington Post, CBS competed with NBC and ABC. A pretty easy field of competition. Now you are competing not just with thousands of networks, but with Netflix and the streamers and Fortnite and video games and every single influencer and creator on TikTok. So even if you were willing to spend the money to do the work, to gather the information, to serve the public interest, good luck getting people to pay attention to you when they have so many other alternatives. There are so many things we could talk about, but to me those are some of the core ways in which even news organizations that continue to this day to try mightily to perform that essential public interest role are facing headwinds that the platforms have a huge hand in providing. Wonderfully optimistic. I want to take us to another time in history where there was a cacophony of voices. We imagine the golden era of
media, right? This golden era of institutional journalism, where it is Time magazine, ABC, the big networks. But there was a noisy time before that too, and I would love to bring you in, Martha, and hear a little bit about how government policy helped build the media that we revere and respect and feel that we are losing. What role did government policy play in creating that era of golden journalism? And probably it was never gold, but it was better than now. You know, the founding fathers of the United States believed that the press was essential. It's the only private industry mentioned in the Constitution, and it's treated as if it existed because it did exist. It is a private industry; it has been a private industry. Throughout the 19th century it went through many changes. There was a period when the political parties were the major funders of major newspapers. There was also a period of what we call yellow journalism, where it was basically about scandal; it really wasn't so different from what some people are critical of right now. But the consolidation of certainly national media, but even of regional and local media, occurred with the rise of technologies, and the technologies of the telegraph first, then radio, then television really provided a predicate for government policy: government policy to regulate scarcity, the scarce access to the airwaves. And in so doing, the United States government actually believed that there was a public interest duty for anyone who received a license. That was true for broadcasting, and it carried over to some degree even with cable, when scarcity was no longer the same problem. But government also has shaped the entire industry with the use of antitrust policy: where is competition required, where not, changing rules about whether the same owner can own the television station and the newspaper, different periods of time, different attitudes about that. Government is all over it. Government also has been a major funder. In the development
context, government, after all, paid for the development of the algorithms behind the internet. Government paid for the development of public media, public broadcasting. Government creates the tax code that has enormous impact on this entire industry. So the government's fingerprints are all over the situation. I think it's so powerful to remember that the government can set those incentives, and not only to think of regulation but also to think about how we can whack-a-mole the bad things. We live in a time where we observe a lot of harms online, and the impulse has been to say, well, we need to regulate misinformation, we have to do something about this. And I wonder, and this is for both of you, but I'll start with Martha, how carefully we should tread, given where we are right now, with talk about jawboning and some of the cases in front of the Supreme Court. So the United States is on the extreme end of the entire planet and all of history with its protection of freedom of speech against government action. Notably against government action, not against private-party, like platform, action. But with that attitude, we should tread carefully. We have a Supreme Court with the most extreme version of a libertarian First Amendment. That said, as to the traditional activities of the government, for example antitrust policy, the Supreme Court has upheld the use of antitrust; the Supreme Court has actually also supported the free speech rights of the private editor, of the private moderator. So there are lots of roles for the government to actually reinforce a much more positive ecosystem. You know, I can't help but remember that when there was a violent video game case before the Supreme Court, there was only one justice who'd ever seen a video game, period. I think that there are more justices now who have some knowledge of technology, but these are not experts, and they've been ducking cases. We'll see, because they can't quite duck the cases that are now before the court
coming from Florida and Texas, efforts to have content regulation. I personally think it'd be much more successful for government to go the antitrust way. Why not require that there's competition on the moderating function? Where is it written that these bundled activities do not violate the antitrust laws? It looks to me like it violates the antitrust law, and if there were competition around that, that's something government could enforce that would make a difference. I also would advocate for treading carefully. As concerned as I am about misinformation, as serious as this problem is in our context of First Amendment concerns, in some ways the content that I'm really concerned about when it comes to the health of democracy doesn't have to do so much with misinformation. The platforms play a really critical role, and that is the extent to which I think they distort our understanding of the state of debate in this country. We know that something like 87% of people who identify as sort of center-right, and 77% of people who identify as center-left, say they never post about politics and public issues online. The middle of our public sentiment space just doesn't want to be in this conversation online, and the result is that what we see and hear from the people who are talking and posting and tweeting, and what shapes our sense of the state of our political discourse, is wildly distorted by more extreme views. So we end up thinking the country may be more divided than it actually is, that our political opponents are more extreme than they actually are. And when Larry talks about our need to learn how to talk to each other again: in order for us to succeed at anything, to solve any problem, to address any of these issues, people have to feel like it's okay to express their values, their point of view, their starting point, and that someone can disagree with them and it's not going to destroy you personally or
professionally. And in an environment where it is so easy to destroy people personally and professionally for an argument they make, are we really surprised that people might be reluctant to engage in any of these conversations? Even our technologists who have left private companies are reluctant to speak out about content moderation practices or trust and safety issues, because they don't want to come into the crosshairs too. So yes, I'm concerned about censorship, I'm concerned about self-censorship, and I'm concerned about the way this environment we're operating in gives people lots of reasons to self-censor. I want to stick with that, because I think there's something really important to underscore there, which is: we so often look to social media and say, oh, this is causing the harm, this is causing the ills. But in what ways is it, like any technology, a reflection of the society that we're in, where other mechanisms, other spaces in which we might be bonding with one another and building that sort of connective tissue, aren't happening, and it's too noisy online to really figure that out? And I'd love to hear from both of you: to what extent are we trying to pin this problem on social media when it's a much larger cultural one? I think it's hard to separate online experience and offline experience in these ways, but to pick up a topic that is near and dear to Martha's heart: the news organizations that were hardest hit economically by the rise of the platforms are local news organizations. And as it happens, they are the most trusted, so we can least afford to lose them, and they are most missing. If you look at the maps of news deserts and the maps of where political power is concentrated in this country, it's the same places. You all know, because you're students of how the Electoral College works and how the Senate works, that a voter in South Dakota is way more powerful than a voter in California, by orders of magnitude. And yet that voter in South
Dakota is also much less likely to have reliable access to relevant local information. And so this is where I feel like, if we don't at least start with rebuilding our local news ecology, I don't see how we work up to a healthier, broader public square. I couldn't agree more. And if you don't have local news, we have pretty good research that shows private corruption goes up, government corruption goes up, accountability goes down. So for me this is top of mind: how to rebuild local news. I'm the chair of the MacArthur Foundation, which is part of a coalition of 21 foundations and philanthropists that are trying to invest; we've committed half a billion dollars to rebuild local news. But in addition, I'm interested in initiatives like, at MIT, the Center for Constructive Communication, which is bringing people back to public libraries, but also using technology to allow people to hear trusted conversations from people that they don't encounter in their daily lives. I mean, I think Nancy made such an important point: the narrative that we have, that we are polarized, is self-determining, it's self-fulfilling, when in fact most people don't like what's going on, and most people want rather similar things; they want opportunities for themselves and their families. So I think that if we could have more of the face-to-face conversations, as well as more bedrocks in local communities and regional communities, and build back trust, in my view that's the most important thing to be working on. The first panel talked about this desire to have different forms of content moderation, the ability to engage with platforms in different ways. And while there's some part of me that's very interested in, you know, devolving that kind of control over your experience, I do wonder how that might challenge our ability to have shared realities with one another. The useful role of the news, whether you liked it or not, is that it generated
consensus about what the parameters of debate were, some consensus about what was actually happening in the world, and it's not clear that anyone has really stepped in, in a centralized way, to have that role. We have small pockets of shared realities rather than consensus. And we have a minute and a half left, so I'll ask you a big question for that time. Especially as we're talking about the role of technology in helping us create that shared reality, what is the other work that we should be doing to create it? Well, the problem is larger than the technology. We have rival media companies that use even old-fashioned distribution systems and tell completely different stories about what matters, and, even when they cover the same things, different versions. And this is before we even have the deepfakes. On trust, from the last panel: I think that trust actually is not something you can accomplish with a magic wand. You have to earn trust, and building trust with trusted navigators is going to be, I think, the only way that we will make it through. Who are those navigators? I don't know, but I wouldn't look to the government as the navigators. We actually are launching a new research initiative at Shorenstein around news influencers, in the belief that we have to look beyond who considers themselves a journalist to who is actually performing the essential functions of journalists, whether they identify as journalists or not. And what is fascinating is, if you ask people whose primary news source is TikTok, Snapchat, or Instagram, they trust individual influencers on those platforms more than they trust news brands. That's not true on Facebook and Twitter, where news institutions are trusted more, but on the platforms that younger viewers are tending to use, the trust relationship is with individuals. And so what I want to study is not just engaging with influencers around journalistic best practices, around rigor and fact-checking, but learning from influencers about
building trust, about building audience, about the connection that they are making. Because I think addressing these problems is going to take real humility and curiosity and a willingness to expand our definitions of who we're engaging with and who constitutes the various entities of this ecosystem, far beyond what we might have traditionally thought of. Which feels like the right way to end a day of fascinating discussion, and to applaud the launch of a new lab, which is very exciting, that will help us navigate how we trust institutions, how we trust the technologies that mediate so much of our life, and how we ultimately trust one another. So thank you, everyone, for a wonderful day and a wonderful conversation. Thank you. Please welcome to the stage the director of the Applied Social Media Lab, Professor James Mickens. Thank you, thank you. Alright, well, let's just give one more final round of applause to our distinguished guests and the great conversations that they set up. Also, I want to give a special shout-out to the president, who couldn't make it here; we hope that he feels better. And by the way, in a very nice gesture, he actually called the green room a couple of minutes before this event started, and he got on speakerphone and said he's sorry that he couldn't show up here. So we do hope that he feels better. Now, by the way, in that call, the only two things that I said were "hello, Mr.
President" and "feel better soon," but just in having a speakerphone call with President Obama, hundreds of my relatives can now go directly to heaven, okay? There's nothing that I will ever personally do that will surpass that, I believe, in my lifetime, except possibly run for political office. President Obama, did you hear something special in me during that speakerphone call? Did you see a hint of leadership? Let's form a coalition of two, okay? Have your people call my people. I don't have any people, okay, so I can maybe borrow some of your people; that's how you get people. You want to get in this building at the ground level; it can only go up or stay the same, because I don't have people. In different ways, I hope that, like me, you came away from this event feeling both a sense of optimism and also a sense of grounding in the challenges that we're going to face. We heard about a lot of the problems that technology can cause for society. We heard about how misinformation can corrupt our democracy. We heard about what happens when social media companies like Twitter cut off API access to great tools like Block Party that help to safeguard vulnerable users online. We also heard about some very positive, optimistic things: we heard, for example, from Larry Lessig about how we can actually use technology to create online deliberations that strengthen democracy and that bring people together instead of pulling them apart. So like I said, I came out of this with a renewed sense of cautious optimism, and I hope that you feel the same way. I hope that what we've seen today is that technologists don't always have to be the problem; they can actually be part of the solution, and that if they work with other people, if they work with users, if they think and reflect about what they're building, then we can actually start to build technology that actually serves the public good. And I'm going to take that message. Coalition of two, you were here when it started. So before we conclude, I
want to offer thanks to Project Liberty, who provided some generous funding for the lab as we're starting to ramp up. We look forward to working with Project Liberty and other partners in academia, in industry, and out in the world on this difficult but important challenge of creating technology that works for the social good. I also would be remiss if I did not mention the new website that the lab has set up. Let me read from here to make sure that I get the URL right: the URL is https://asml.cyber.harvard.edu. As you can tell, only the best and fullest URLs. So we've just launched the website. There's not a lot up there right now, because we're still in the process of spinning up, but over the next couple of weeks, as we start to ramp up, as we start to hire, you'll see a lot of interesting stuff up there. And as a final note, to all the technologists and engineers out there who are interested in building for the public good: go to the website. We've got JDs up; we are hiring. So with that, thanks, and I hope you have a great day.