We're here today to celebrate the launch of BKC's new social media lab and to do so with a terrific panel on the future of the internet, featuring the faculty director of the Berkman Klein Center, Professor Jonathan Zittrain; Kasia Chmielinski, the co-founder of the Data Nutrition Project; Yoel Roth, former head of trust and safety at Twitter; Tracy Chou, the CEO of Block Party; and Jason Goldman, the chief advisor on technology to President Obama and former chief digital officer at the White House. Thank you. Well, thank you so much, John, thanks to the Obama team and to everybody. It's a slightly different Block Party today than we were expecting. But we have a lab to launch, we're going to launch it right now, and I can't think of a better group or setting in which to do it. The Applied Social Media Lab, ASML, is an effort to increase the breadth of the conversation and the action around imagining a better set of technologies than the ones we have right now. We should just take a poll of the room. How many people would say they are thrilled and delighted with the current state of the internet? I saw like a hand tentatively go up and then go back down. Thank you for holding dissent in the sphere for us. It's so easy to get resigned to it, to even have that feel like a kind of complacency. We're hoping together to get beyond it, and to include among us so many folks who have been in the belly of the beast, who have worked in one form or another within Silicon Valley, who have been both excited and at times bemused, even terrified, by what they're doing. Part of the function of the lab is to give a chance to be in a different environment, swim in a slightly different bowl, and build in ways that go beyond the pure economic incentives of the sort that are on display in my Twitter, sorry, X, For You tab. Which, should I ask how many blue checks are here today? No, let's not do that. No offense if you have a blue check, we'll take $8 too.
To think about what other ingredients in the stew can make for something beneficial, uplifting, to capture some of the promise of those early days, and to have this group of folks here to talk about that, both then and now and what next, and to have the folks that are in the panels that follow ours, our faculty panels and such immediately following, is just such a privilege, and we're so glad to be here, and we surely hope that the president will join us again. It's not like these problems are going away, so we'll continue the conversation. So with that, let's get into the panel. Dean Manning already introduced Jason. Thank you, Jason, for being here. Let me turn now. Well, actually, let me ask you, Jason, when was, what was your first internet experience? I was, my first internet experience was a 1200 baud modem in St. Louis, Missouri, to local BBSs. So 1200 baud was probably, that would be 1995. Yeah, I think even 1994, yeah, and so I was, I grew up in St. Louis, was a big participant in the bulletin board systems of the time from there, kind of discovered IRC, which was like a big community early, and then built my first web page, which was very pretentious, in 1995. It was like a quote. Did you get the domain name, verypretentious? It should have been called that, because it was like a quote in French from, like, Waiting for Godot, and each, like, line in the quote linked to a different part of the website. Is it still there? No, I can't find it. It's not even in the web archive. No, it's, I think it was too early for Brewster. So it's like, it's really, it's thankfully lost to the sands of time. And then, yeah, you know, discovered Usenet and other places. And then, I'd better ask how old you were back then, if you'd share. I was like, so I would have been like 16 or something like that, yeah, yeah.
And so I ended up working on Blogger, a weblog publishing platform, and then worked on that at Google, and then was part of the founding team at Twitter, and so sort of my, my passion became a career as well. And if there was one word to describe your sensibilities about the internet circa 1995, what would the word be? Portal. It's like always the word that comes to mind. Like it was like the idea that you opened up this magic scrying glass into another person's experience and were able to see the experiences of other people on other parts of the world, and understand and hear from people in, like, their native vernacular in a way that you'd never been able to, and growing up in St. Louis, you feel like everything's the same, and suddenly seeing people from all over. That's great. I feel like there's just a bumper sticker. Like there's only one letter between scrying and crying. Yeah, that's right. The theme, but we're, we're not just going to be pessimistic. All right. Terrific. Thank you. Kasia Chmielinski, did I pronounce that okay? That's great. I'll take it. You've led a career and a life so far of what might be called digital public service. You've done time, is that the right way to put it, at Google and McKinsey. You've been at the MIT Media Lab, where you worked on the team that developed Scratch. You were among the initial cohort of the US Digital Service, stood up within the US government during the Obama administration, and, these days, the UN Office for the Coordination of Humanitarian Affairs. And then you have started a number of projects in the public interest for a better internet. What was your first internet experience? Wow. So I was a really basic kid. I think that basic. Yeah, I said basic. Yeah, I feel like that was before kids who were basic were called basic. Not BASIC the programming language, although that was part of it. I also think that this question is just like a low-key way to figure out how old each of us is. That's right.
It's like a BuzzFeed quiz. Exactly. That would be the how-many-baud-modem question. Yeah, yeah. So I think my first experience was really, I remember two things. One was chatting with my friends. There were these little AOL discs that were sent to your house, and you'd use that and dial up. Like, if you don't know what I'm saying, just ignore it all. It's fine. These AOL discs, they were like dropped out of helicopters. There were so many of them. Yeah, they were amazing, right? And they were this kind of portal to get you online. And so I remember chatting with my friends, and I remember downloading music, and I'm pretty sure in retrospect that it had nothing to do with actually the chatting or the music, why I found it so delightful. Because the chatting, I was really chatting with kids that I was seeing every day. And I was a very slow typist, so it would have been better for me just to chat in person. But there was something really magical about being able to represent myself however I chose, choose a username, choose a profile, and something about identity there. And then on the music side, it definitely wasn't about the music, because it took like a week to get a song. But there was something, again, really magical about the notion that hundreds or thousands of people were contributing little pieces to that file. The peer-to-peer community, a.k.a. the pirate crowd. I didn't know it was that when I was that age, but sure, you could call it that. So I think those two things really highlighted to me the power of the internet. I wouldn't have said it as such. It just felt like magic, that these things that had no analog in the real world I could do online. Yeah, boil-downable to a word the way Jason did with portal? I mean, probably magic. That's how it felt when I was 12 or 13. Yeah. Terrific. Yoel Roth. One of the best lines of Yoel's LinkedIn is that he was a genius from 2008 to 2011. Just like, huh? Which he meant literally; under that, it said "fixed Macs."
So you fixed Macs as a genius. It was, it's my favorite job title by far. And I wish I could say that it was a safer profession than the one that I entered into later, but it totally wasn't. I'll always remember the day when somebody with a broken iPhone approached the Genius Bar and was so irate about the fact that their phone had broken that they just chucked it at me. Kind of answers the question of how it got broken to begin with. Yeah, I was sort of led down a certain path of understanding this person's relationship with technology. And with their fellow humans. But, but yeah, it was, it was really this moment. Throughout college, I worked for Apple fixing people's things, and it really shaped how I think about the connection between people and their technology. I was there when the first iPhone was introduced. I remember the sort of immediacy with which people connected with it and the feeling of transformation. And I had a really cool job title. I got to tell people I was a genius. Also interesting to think, because that, that sort of job has expanded and proliferated, of trying to be helpful when other people are in extremis, and when there's some mess, and then there's a crew of people we expect to be smart and have a smile and just clean it up. And as you said, your path included a PhD in communications and then ended up head of trust and safety at Twitter. I made some mistakes along the road. Yeah, I'll leave it, I'll leave it at that. All right, we're going to have to talk more about that. But what was your first internet experience? You know, I was, I was all set to talk about the 14.4 modem that I remember getting. It was a 3Com. It was very exciting. It kind of makes you wonder why the modem people didn't just start at 14.4. Yes. Um, why did they build up? Yeah.
But now that, now that I'm not sharing the stage with, with President Obama, I get to tell the fun story that I was nervous about telling, um, and I hadn't, I hadn't thought about this in about 25 years, but, um, I remember, I remember when there was this new thing called Google, and I remember being in the computer lab in middle school, and I was really upset that the school district's proxy server blocked Google, and I remember thinking, God, this censorship. And so, you know, young Yoel starts thinking, what does one do in the face of censorship? And my answer was scamming the admin password out of the person who ran the computer lab so that I could configure the browser. Social engineering. Absolutely. Yes. Uh, and then, and then I remember the magic of it, right? Like, I'd been using Lycos, and at the time there was this thing Dogpile, which was a search engine aggregator. And then all of a sudden there was Google, and it felt transformative in that moment, and also subversive, because I had to social engineer the admin password out of the person who ran the computer lab. Were you caught? No. Well, that person is here today and is ready to. If this panel turns into, like, the history of each of us confronting the folks that we've wronged, that would be great. Very good. Tracy Chou, computer science, machine learning, and artificial intelligence, graduating from Stanford. Can I say the year? In the late aughts, which is to say a time before AI was, you know, everybody's AI now, but you were AI before everybody was AI. That was in the period when they thought neural nets didn't work. And so we didn't study them. Yeah. Yeah. Well, jury's still out, but you were employee number four at Quora. Wow. And a similarly low badge number at Pinterest. And then off to the U.S. Digital Service, talk about a pivot and inflection point, and then Block Party. Yeah. What was your first internet experience? I'm going to go against the grain here and say I was pretty unimpressed.
So my parents were both software engineers, and so my dad had computers everywhere in our house. So he was very excited to get me online. It was Prodigy, I think, like the little dial-up beep, beep, and then I think there were some games that you could, in theory, play with other people around the world. I was like, why do I want to play chess with somebody I don't know? That's just kind of weird. I'd rather play chess with you. That was my first experience, a bit unimpressed, but I think it's maybe characteristic of being still pretty young then and feeling like technology is just in the background, is just part of life. It's not something that unusual or different. Yeah, it's amazing how much we can get used to the status quo; new generations, of course, grow up in it, so it feels so different. What would then your one word be, meh? Underwhelmed. Underwhelmed. And Yoel, we didn't give you a chance. What was your one word? I mean, magic, right? Oh yeah, you just, I'm sorry. You can never get past that first, I mean, I guess Tracy never had that first feeling of actually being impressed with the internet. Still waiting, still waiting for you. My feelings have soured somewhat, but there was that early moment of really being enchanted by the thing. Yeah, I suppose there's a regrettable magic as well. And do you want to say just a word about Block Party? Tell people what Block Party is. I started Block Party to build tools against online abuse, since I was getting a bunch of abuse on Twitter. It was great. And then Elon took over Twitter and we had to shut those down, and now we're building new tools to help people stay safe online. That was an incredibly pithy, tweet-length description of it. And we're going to want to hear more about that, but let's now turn back to you, Jason. And I don't know if you know what the president's first internet experience was, but I already asked you yours. I guess you were in the Obama administration as chief digital officer.
So when was that? I came in at the beginning of 2015, and I was there until the day before inauguration in 2017. So I was there for the last run. If you could enter, I'm now going to use the words that were shared, a portal, and through magic, somewhat unimpressively, go back to 2016, right, and there's you, 2016 you. What would you whisper into your ear, after reassuring yourself that this is normal? Well, I think, I mean, the big thing that I was working on in the Obama administration was we were standing up a lot of channels that the White House hadn't used before. So, like, before the Obama administration, there was a website, obviously, going back to the Clinton administration, but it wasn't an active place where you get news. There was never a Twitter account for anyone at the White House. We launched a Twitter account for the president at @POTUS while I was there, and generally tried to bring the White House and the president to new channels. And I think that was an appropriate focus for that era of the Internet. I think, from a reaching-people standpoint, the thing that that misses is that the way in which the Internet has evolved has shown that all media is really niche media, and that what you need to do to connect with people is find where they're already hanging out online and engage with those audiences there. And I think a lot of what communications generally has pivoted to is, instead of launching your own channels, it's like partnering with the people who already reach an audience that you want to talk to. And that's true in politics, and that's true in, you know, other contexts as well. Well, that totally makes sense as an answer. And of course, that's an answer in the context of your job, figuring out productive outreach across all channels in the digital space. But you've been thinking really broadly about technology.
So maybe I should refine the question to be, is there anybody else that 2023 you would want to go back to in 2016 in the federal government to give a word to the wise, and what would that be? I mean, I think the, you know, certainly, I mean, if I could go back to 2016 and be like, listen, the pandemic response, you really need to make sure that, like, you know, you work on that, make sure those plans are well set. So, like, I would, you know, prioritize, just, like, put that in an envelope. Yeah, January 2020, exactly. Like, let's just start, let's really think ahead there. And in fairness, they'd, they'd done a lot of that work. But, you know, I think the thing that brought me into the government was President Obama said to me, you've worked on a lot of systems that have built tremendous shareholder value, but you've never worked in a place that's inherently, inherently for people and for the public good, and that's invaluable in a positive sense. And what does it mean to evolve these technologies in a way that's inherently more, more positive and is infused with the values of democratic culture? And so I think that's part of what we tried to do in the administration. I think, knowing the challenges that particularly social media would be under in the 2016-through-2020 lens, I would have probably encouraged a much more robust engagement with industry, to be like, hey, there's going to be real challenges that exist in a values space, and we need to try to encourage industry to put a marker down in terms of what values they care about. So let's do a bit of engagement with industry right here. So I'm going to ask you, Yoel, if it's okay, to imagine being back at Twitter. Let's suppose Mark Cuban owns it now or something, and there's going to be engagement between you and Jason about the public. Is this time traveling, Jason, or are we back in sort of the present day? I feel like it's getting too confused if we're hanging out in 2016.
So let's come back to 2023, if that's all right. And I'm curious first, Yoel, your sense and what you would recommend to a new CEO of Twitter, or if you came back as CEO, how to infuse the kind of public interest values that Jason was talking about while also having to think about your share price and your shareholders, or I guess Elon doesn't have public shareholders now, but we digress, how to reconcile the public interest stuff with the profit stuff, and what the responsibility of the company should be. Curious your thought on that, and then maybe just any dialogue you two would have over that for a moment. You know, I think what we've seen, not just since Elon Musk bought Twitter, but across the history of social media, is a profound failure of institutional trust. I think we've kind of built these platforms, and they have all sorts of cool features, and they make a bunch of money. Twitter never so much with the making money thing, but in theory platforms make a bunch of money. And we haven't built them in a way that engenders public trust, right? There's communities of, in Facebook's case, billions of people on these platforms, but there's no real sense that there is legitimacy to the governance that these platforms are exercising. And Twitter wasn't perfect at this when I worked there. Twitter has never been perfect at it, nor has any other platform. But I think we've seen an even further erosion of that trust in the platform in the chaotic ways that it's been governed since Elon Musk's takeover. And I think the solutions to that are straightforward. We know what builds trust in institutions. We know that that's communication. We know that it's being forthright about how you are governing things. We know it's accountability, with data that is externally auditable.
And what I would encourage whoever owns Twitter, or whoever is building the new Twitter, to consider, let's address this to Mark Cuban, you know, is to think about trust as the object that you are trying to pursue. Think about what you need to do to build a system that people can understand, that they can audit, that they can feel has legitimacy, and that they are participants in. And unfortunately, I think all of that is sorely lacking from the Twitter of the present day, and in various measures was sorely lacking for years and years prior as well. Jason? Yeah, I think, I think Yoel and I unsurprisingly agree. I think the thing that the government would say as well, or, you know, that President Obama would say as well, is that it is trust. And I think particularly you hit on the concept of the need for transparency. I think for too long on these issues, the companies have said, we are the world's experts on these topics. We know more about these problems than anyone else. We've hired all the smartest people to work on this, and we've got it. And I think, notwithstanding the geniuses that these companies do bring to work on these problems, that's just not going to be a sufficient answer. You're going to have to let other people grade your homework. You're going to have to let other people look for harm that maybe a for-profit business isn't incented to look for itself, which is why President Obama in his Stanford speech talked about the Platform Accountability and Transparency Act as, like, a specific legislative measure that we thought at that time would have made sense, forcing platforms to do more of this collaborative exercise and sharing what they have. Now, the riff that the two of you just had kind of dwells in a platform that should have an attentive team, a transparent set of policies, probably a consistent one, and that then applies them with some upstream values that are themselves announced.
Of course, there may be a lot of folks in 2023 for whom they just work backwards from the outcome. And if somebody who's on their team is blocked, that means it must have been a bad decision, and their trust level goes down. And then if something goes through that you think shouldn't have gone through, you feel the same way, rather than feeling that the system is good. This is maybe a way to bring you back in, Tracy, because is Block Party meant to not have a one-size-fits-all? Who are the blockees and what is the party? The party is everyone. So Block Party is built on a philosophy of user empowerment, allowing individual users to choose what they want to see, what they want to engage with, and to be in control of their experience online. And so you no longer have to rely on a single central authority, which might be Twitter's moderation team or Facebook's or Instagram's, to make all the determinations of what is acceptable or not and what everybody is going to see. But each person gets to choose. And so, actually tying back into some of the regulation questions, I think going beyond transparency, one way that the government can help is forcing a level of openness such that additional developers, third parties, can build these solutions that work on behalf of end users. So just map that out a little more for us. This is, like, I guess Block Party is not functioning right now. Not the Twitter products. Our code is all there. We're waiting for, you know. But on a day in which the connections to Twitter are open, how does it work? So you would sign up for Block Party, configure your preferences. You could say, I'm pretty open, like, let most things through. I want to see stuff. I want to engage with the internet today. Or you could say, I need a break. Please filter things more aggressively for me. I think that's the mode that Yoel is on. This product, by the way, made Twitter usable for me and for many, many others.
And it's truly a profound loss that Tracy and Block Party can't do the incredible work that they were doing. Thank you, Yoel, for the testimonial. Save that one for our advertisements. But Block Party would just run in the background to automatically filter out and mute people. How does it decide? I get that Jason might set the dial one way and Yoel and Kasia set it somewhere else, but how does it know then how to reflect what the dial is asking for? It turns out there are really simple signals you can pay attention to, such as, does this person have a profile photo, or are they an egg? Did they just create their account, and all they do is tweet at people who post on a particular topic? How many followers do they have? If they have zero followers, they're probably somebody you don't need to listen to. So you have a not-so-secret sauce that uses, it sounds like so far, non-content-based ways of finding dodgy characters. Yeah, actually, user behavior and user characteristics can be much more informative and simpler to understand. So we made these auto-mute reasons, as we called them, very visible to end users, and they're configurable. So for the people who care, they can understand what's happening underneath the hood, which is actually a very nice property of the system, which we don't always get with machine learning or AI-based systems that do some magic underneath the hood and can't be understood. Reminds me, back in the days of eBay, some accounts would have sunglasses next to them, which I thought meant they were cool, and I would buy stuff, and it meant they were shady, because they were new. It's just kind of an HCI problem, I think we would call it. But those kinds of things let people who decided to use the Block Party add-on do it. And that was both, it sounded like, an aspiration coming from your own experiences, and it was a business. Yes, it is a business. Yeah, people paid subscriptions to be able to clean up their experience.
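The signals Tracy describes (no profile photo, a just-created account, zero followers) and the configurable "dial" can be sketched as a simple rule-based filter. To be clear, this is an illustrative sketch, not Block Party's actual code: the field names, thresholds, and strictness modes are all assumptions. What it does show is the property discussed above, that every auto-mute decision is backed by human-readable, content-free reasons.

```python
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    has_profile_photo: bool
    follower_count: int
    account_age_days: int

def mute_reasons(account: Account, strictness: str = "moderate") -> list[str]:
    """Return human-readable reasons to auto-mute an account.

    Uses only profile/behavior signals, never message content,
    so every decision can be shown and explained to the end user.
    """
    reasons = []
    if not account.has_profile_photo:
        reasons.append("no profile photo (default avatar)")
    if account.follower_count == 0:
        reasons.append("zero followers")
    # A stricter "I need a break" mode also catches brand-new accounts.
    if strictness == "aggressive" and account.account_age_days < 7:
        reasons.append("account created in the last week")
    return reasons

# A classic "egg": no photo, no followers, two days old.
egg = Account("user12345678", has_profile_photo=False,
              follower_count=0, account_age_days=2)
print(mute_reasons(egg, strictness="aggressive"))
```

The design choice worth noting is that the function returns the reasons themselves rather than a bare boolean, which is what makes the filtering auditable by the person whose feed is being filtered.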
Uh-huh. And then it stopped, because Twitter basically wanted to charge a bunch of money to let the data flow so that you could have your... Yeah, the pricing was prohibitive. There was essentially no desire for Twitter to allow anybody to continue building on their platform. Uh-huh. Last question on that. I thought that it was also possible for me to indicate friends on Twitter, and whenever they blocked somebody, I automatically blocked them. Was that part of the deal too? We allowed people to share their block lists and block en masse. So another way that you could use the tool was to block all the people who liked or retweeted a tweet, especially if it was saying something nasty about you, a very efficient way to find those people and block all of them. Fascinating, which might mean people would encounter accounts they hadn't interacted with and find themselves blocked, and there wouldn't be, for them, an explanation. It would just be, thank you for playing. Got it. All right. I mean, you're getting at an interesting point. I think when you ask about sort of the work that they're doing, I mean, you've written about this before in terms of community-based moderation, and that is, like, another way forward. And so I think part of the question you're getting at is, what would be the things that government, industry, and others can do to encourage, like, the commons that would allow community-based moderation to exist, such that shutting off Block Party would be prohibited in some way, if not legally, then, like, by some industry norm; it, you know, wouldn't be allowed. Like, what are the ways in which we would sort of protect those commons and enable that kind of innovation? Because I agree with you, and the arguments you've made about this, that this represents a positive direction and a positive way forward that takes us out of the dichotomy of government censoring speech or one person who owns a platform censoring speech.
And so I think that the next-level question is, what needs to change so that what happened with Block Party is anomalous and not just the way in which the single owners would react? Well, I was gonna say that tees up a great question for Kasia, because you think about the overall ecosystem. The vision that Jason described and that Tracy's work represents, is that to your eye a good vision? And is it attainable? What would it take to get from where we are to where that could happen? Yeah, I mean, I share the vision. I think it's not surprising. We're all up here for a reason. I think that it really comes down to enabling meaningful choice for consumers. So the work that we do at the Data Nutrition Project is we build nutrition labels for data sets. The analogy there is essentially, if you walk into a store and someone says, you should eat this food, it's really good for you, and you say, what's inside, and they say, I can't tell you. But you should eat it. It's really good for you. We as consumers now have been trained to expect that we can have access to that information, right? And that we can turn around the package of cookies and we can look at it and we can say, is this something that I want to eat, and also, if I have allergies, is it something that I can choose not to eat that someone else might choose to eat, right? And this kind of gets to your question, or your feature, of user selection and preferences. I might choose something different from you. That's fine. So I think the question of how do we enable that is a really good one, and it comes down, to me, to this notion of digital public infrastructure. So what are the kinds of things that we should expect of our tech companies and platforms and solutions that are kind of similar to infrastructure that we have in the physical world?
The roads and the sewers and the water and the electricity, and things that we might, at least in some areas of the world, take for granted that someone is making sure are safe and available for us. Those same kinds of things should be available for our technical platforms and our solutions. And if we're really going to hold to that metaphor, who does build that stuff? Is it still companies that build them? Is it government that builds them? I think it kind of depends on what you're talking about. Yeah. I imagine that some things you'll want to have more control over, if not in the build, then at least in the regulation. Some things might be public-private partnerships. And maybe there's some component. Honestly, I believe that a lot of what government can do is set the protocols and standards and expectations of what the technical companies then need to adhere to. So an example might be phone systems, or telco, these kinds of things. We should all be able to SMS each other. It doesn't matter if I have T-Mobile or you have, whatever else, they've all merged, so I forget which ones are different. But I should be able to send a text message, and you should be able to receive it. Why is that not the same thing with all of our messaging platforms and all of our various other kinds of technical, you know, our software and our services? Which is so funny, because when you ask really just straightforward, self-evident, you know, "yeah, why not?" questions like that, I think that was a 25-year effort to make phone numbers interchangeable and portable. And Tom Wheeler, I don't know if Tom's here today, but over at the Kennedy School, FCC chair under Obama, like, worked on NANC, the North American Numbering Council, that's there making sure you can port your phone number. And the European Union came in so that, at last, iPhones have a USB-C connector.
I mean, it sounds pretty technical, but I guess it means you don't have to throw out all your chargers if you no longer like iPhones, kind of thing. And you're saying those sorts of things for interoperability would be good. I don't know if others have a sense of a vision between centralized platforms, where you kind of just pick the best one, and that's kind of your friend graph and where you're going to live for a while, like the original choice you make of Apple versus not Apple, versus these distributed systems that seem to offer the promise of, like, if I don't like this Mastodon server, I can move to another one, or maybe I can port my friend graph to Bluesky. I don't know. Anybody have thoughts on the promise or peril of decentralization? I mean, it's both, right? Like, I think we are at an incredibly exciting moment in the history of the internet, because for the first time in 15 years, there is actual serious innovation happening in the social media space. And we're seeing a lot of new entrants. Mastodon is a big one. Bluesky, Pebble. There's a number of others. And we're seeing development of infrastructure that can help make some of these services interoperable, that can give people meaningful privacy choices. This is super exciting. Can I just do a quick, just a snapshot of 2023? Completely, social scientists everywhere, like, don't do it. Like, non-random group. How many are on Twitter slash X? I see a number of hands up. How many are on Mastodon? More hands than I expected, but fewer than Twitter. How many on Bluesky? More. Are there more I should be asking about? Threads? How many Threads fans? But here's the real one: how many people consider LinkedIn, like, one of their primary platforms where they post? There you go. I would like to add you all to my professional network. LinkedIn. LinkedIn is the sleeper, the sleeper cell, and particularly for this crowd. Talk about playing the long game. Yeah. Yeah. Yeah.
I think, you know, just in the interest of having some friction, I agree with you all that we're at a tremendously exciting moment for social media, because it feels like we're kind of staring into the maelstrom of a broader sea change, and so it's exciting to sort of see what might come out of that. I have skepticism about some of the federated platforms, because it's so reliant on establishing the protocol, and the focus, really, if you look at the Bluesky mission, is not about this app experience that people have had fun with but really about building this federated protocol. I think when you have that focus, you sometimes lose the ability and flexibility to innovate on user experience, which is what actually draws a critical mass. So there's the Steve Jobs thing of, it just works. It just works, it's seamless. And then there's the build-your-own-Heathkit-radio kind of thing. Yeah. And, you know, everyone on this stage knows, when you build systems, defaults matter, and people don't switch off the default. So the idea that, yes, people can build their own graph, or they can curate and select their own algorithm for what gets amplified, that is technically interesting. And I am excited that people are going to try to do stuff with it. I am deeply skeptical that that is going to be a feature that people opt into unless there could be some way to make it pretty easy to do. Yoel, you have a thought on this. Yeah. No, I mean, I think that's exactly right. And more than that, like, I was trying to fight with you, though. So, wait, let's talk about another failure condition of these systems: moderation, right? Tarleton Gillespie, a professor at Cornell, has written that the primary commodity that platforms are selling their customers is not the buttons and features and uploading a picture of your cat. It's moderation. That's the whole ballgame.
And what we're seeing with most of these new entrants is that they have to reinvent the stuff that the big platforms spent 15 years figuring out. We saw that with Clubhouse. Exactly. And so you see platforms that mean well, that want to build engaging consumer products, stumble headfirst into the age-old classic problems that every platform has wrestled with. Like, how do you block people from including slurs in their usernames? You deal with that by having a list of slurs that you block in usernames. But how do you come up with that list? Well, you write down all the bad words you can think of and then you pass it to your friend. There's not an international standards organization list of slurs. Yeah. Well, so that's what we're thinking about as a solution. But we're seeing that every one of these platforms has to do the same thing that every other platform has already done. And in the interim, people get hurt and harmed because the technology isn't in place to deal with trust and safety. So is that a form of public infrastructure, Kasia, that you'd want to see built? I mean, yes, or it's just about open source and sharing of information and frameworks. So when I was at Scratch, which is really the largest kids' social network on the internet, but also a learning program, I'm just thinking about how we did have a bad-words username list. And I don't know, actually, that a list held by the government or by a third party would have been useful for us, because the kinds of... Somewhere the FCC chairs are like, no. The kinds of bad usernames are things like, you know, space, space, space, space, space, space, poop, right? And, I mean, there are some very, like, bleep that, somebody bleep that. But, you know, so the bad-words list might be different based on where you are, the context. But the idea of there being a bad-words list, and the need for that, should be common knowledge.
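To make that "space, space, space, poop" point concrete: the usual first defense is to normalize a username before checking it against a blocklist, so padding and punctuation tricks collapse away. This is a minimal illustrative sketch, not any platform's real policy; the list contents and normalization rules are placeholders, and real lists are long, language-specific, and context-dependent, exactly as the panelists say.

```python
import re

# Hypothetical, tiny blocklist -- a stand-in for the list every
# platform ends up writing from scratch.
BLOCKED_TERMS = {"poop", "badword"}

def normalize(username: str) -> str:
    """Collapse padding tricks: '      poop' and 'p.o.o.p' both become 'poop'."""
    return re.sub(r"[^a-z0-9]", "", username.lower())

def is_allowed(username: str) -> bool:
    """Reject any username whose normalized form contains a blocked term."""
    canonical = normalize(username)
    return not any(term in canonical for term in BLOCKED_TERMS)
```

Even this toy version shows why a single shared list is not enough: substring matching over-blocks innocent names (the classic Scunthorpe problem), and what counts as "bad" shifts with the community and its context.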
And that kind of thing, I think, in terms of open structures and frameworks and just knowledge sharing, I think is very important. I don't know if that's public infrastructure or just knowledge sharing. This also touches on what you said before, and I think sort of allows me to bring in the President Obama of it again, which is, something I think he's very proud of is the work that USDS did, and I think one of the reasons why he's excited to be here is because we have folks who worked on USDS. USDS was also the agency that brought me into the White House; I sort of started there and then ended up moving over. And one of the things I learned from talking to the folks who worked on USDS is that the government is very good at counting things and making lists of things. It's very good at the data gathering and saying, this is a canonical set of data for this problem. And that's true particularly when you're dealing with new, technologically innovative problems. For example, one of the most important things that the government did with respect to automotive safety was simply establishing a standard for how we count highway deaths and building a scoreboard: this is how many people died this year, we'd like this number to go down. And showing that to industry and saying, this is the number of highway deaths. And it's controversial. The Europeans don't like the way we count highway deaths. The industry probably didn't like the way we count highway deaths either.
You see all the same problems you have whenever you establish a standard, but that notion of bringing technologists into government to answer the problems of what core infrastructure we need in order to hold a better mirror up to industry is, I think, one of the legacies of the US Digital Service program, but also a way in which government in general can try to engage on some of these questions, whether in social media or AI or anything else. I still want to figure out if others are on the same page about the ultimate goal here. I could see, a couple clicks further, that there might be excitement among, say, the four or five of us about some kind of federated system, so that if you have built the next Clubhouse in your dorm room, and content moderation was the farthest thing from your mind, possibly appropriately, because it was for you, and then 20, and then 200, and then by Thursday two million of your closest friends, if there could be a plug-and-play content moderation module, of the sort that Block Party is at the individual level for somebody trying to pick up the slack for what they feel is not the right setting of the dial for a platform, is that a good thing? Is it going to be set-it-and-forget-it? And it might well be a kind of commons, where, I don't know, how do the complex decisions about what lawful-but-awful speech stays and what goes get made in that kind of collective system? I mean, that's the risk, right? The more that we centralize these systems, and it could be in the government's hands, it could be in philanthropy's hands, it could be in academia's hands, you end up embedding different organizations' values in that. And I worry for the future of the Internet about homogeneity as a negative outcome. I worry that all of the Internet starts to look like Facebook. And you know what? Facebook is probably right for a lot of people, but there are values embedded in Facebook's moderation.
There are values that suggest that nudity isn't OK. And those are not universally held values. And when you start to make that the default for not just the existing platforms and their users, but for every new technology that emerges, I think you start to end up with an Internet that lacks the diversity that meaningfully empowers consumer choice. And that's a problem. And the more formalized it gets, the more that government gets involved in defining what those standards are, I worry that we start to have a sort of regression to a kind of bland, undifferentiated Internet that makes it harder for people to find the magical uses. You know, when I was a teenager in the closet figuring out what it meant to be a gay guy, I learned that online, because there were different spaces on the Internet with different rules and norms and governance structures. And I think we need that. I think it would be bad for the Internet if we didn't have it. Tracy. I think oftentimes people don't realize that moderation is the flip side of recommendations, and it's not like these two systems operate entirely separately from each other. Most people are familiar with the concept of the algorithm that determines what you're going to see, and they think it's all about what gets boosted. But moderation is a big part of what is not even considered in the candidate sets. And to Yoel's point, if we have that centralization, there's a single algorithm that people cannot opt out of and that is just dispersed across the entire Internet because it's been plugged and played. We lose a lot, and there are those inherent values that are embedded. The fact that users, people, can't choose what they want to see. If I go to the grocery store, I can choose to not pick up magazines or newspapers, that's great. I can choose to read the scientific magazines if I want. When I go on Facebook, I just get tabloids all day long.
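Tracy's point that moderation runs before the "candidate set" ever reaches the ranker can be sketched roughly like this. The names and fields are illustrative assumptions, not any platform's actual pipeline: moderation filters first, so removed posts never appear in the feed and the user never knows they were in contention.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    score: float                                     # hypothetical predicted-engagement score
    policy_flags: set = field(default_factory=set)   # labels attached by moderation systems

def build_feed(candidates, excluded_flags, limit=10):
    # Moderation acts first, on the candidate set: any post carrying an
    # excluded flag is dropped before ranking even sees it.
    eligible = [p for p in candidates if not (p.policy_flags & excluded_flags)]
    # Recommendation acts second: rank whatever survived by score.
    return sorted(eligible, key=lambda p: p.score, reverse=True)[:limit]
```

The design point is that the two stages are one coupled system: changing `excluded_flags` (the moderation values) changes what the "algorithm" can possibly show, which is why centralizing one and not the other doesn't give users real choice.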
I don't get to choose that, and there's no way to opt into a different user experience there. And what's also dangerous with this is, if the platforms are the central authority, they determine that algorithm, and they can be co-opted by authoritarian governments as well. So if there's a country somewhere that wants Facebook to show certain things, and they have enough leverage against Facebook, they can force the algorithm to do that one thing. And so I think the ability for people to choose different experiences, to choose these different algorithms, it's a very technical term, most people don't care about choosing their algorithm. But if they could choose a different experience, I think you would dramatically change the face of the Internet and also make it much more resistant to authoritarianism. Or even choose proxies, like, I want the Internet that Ralph Nader wants me to see. I'm probably the only person that would say that. But, you know, pick your celebrity, your advocate, or even a blend. I'd like something that makes it an easier choice than a company. You get an AI avatar of Ralph Nader that, like, sort of reads the timeline. I think also we're talking about a culture change and our expectation of technology. I mean, even this notion of, I should be able to choose my algorithm, you're kind of assuming that people know that they're getting served something that's different from what someone else is getting served. And so I think we kind of have to go back to the beginning and talk about people's expectation of technology. You know, when I'm walking around in the world, there's not, like, someone with a clipboard following me around yelling things at me and saying, turn left, turn right. That would be crazy, right? I have that expectation. That's called a digital assistant. Yeah. OK, I'm not there yet. Maybe one day I will be, but currently I'm not.
So, you just, like, totally went through that without turning on the Garmin. Yeah, that's right, I just silenced it. But yeah, so, I mean, I have a reasonable expectation when I'm out in the world that I have a certain amount of privacy, that I have a certain amount of choice. And when I go online, a lot of people might think that they also have that level of choice without realizing that they really don't. And so what would happen if we went back to the beginning and said we required that companies tell you that you're seeing things for a certain reason, and also let you say, I'd like to see this as somebody else, or turn off all personalization? And you could actually do that easily, not buried somewhere deep inside, where you have to go looking for what you already know you want to find when no one else knows that's even an option. Right. So I think it has to do with culture change, too, and our expectation of technology. Yeah, it's amazing to think that even the companies themselves at this point may not have, in one person, a sense of how their own algorithms work. But maybe I should ask you, from where you've seen so much go down: how do you think about radicalization? A term bandied around a lot, which, if there's anything to credit in it, is a pushback against the user empowerment story, because it says that people at a vulnerable time, maybe a vulnerable time of life... A lot of folks here started as kids with their first internet experience. Kids are getting online, and maybe, out of a sense of trying to make more secure school environments, they're not getting their introductions through school or through other trusted adults. And then they can go down a primrose path where their choices, and what they're asking for reinforcement on, could be, quote, radicalizing. And I'm just curious, how much do you credit that story? What's it missing?
And if there's something to worry about there, how would you deal with it without the kind of parentalism that Kasia is worried about? You know, I think this is the pitfall of the consumer choice analogies that we're really drawn to. And I'll note, I think it's a particularly American fixation on consumer choice. This is not a universally held view of how people should engage with the Internet or how the Internet should be governed. But what's the other view, in a word? That there are certain values that as a society we can agree upon should be built into our technology, and that our democratically elected representatives, i.e. in the European Commission, should express those in regulation, and that that should be binding. This is where John Perry Barlow said the First Amendment is a local ordinance. Right. But, you know, as Americans we're like, yeah, you go to the supermarket and you can buy your Froot Loops or Frosted Flakes or whatever. And that's how we go through the world and how we expect to deal with things. That's not how online harms work. I think consumer choice, and the ability to sort of say, I want this filtered out, and somebody else has a different standard, works for some things, and it completely comes apart at the seams for others. At Twitter, we would talk about the difference between something being a perspectival harm, it's bad from your perspective, versus something that is a global harm, something that is dangerous whether or not you see it and whether or not anybody else sees it. Imagine somebody posting my home address, a thing that I'm sure many of us have experienced. That's a global harm. It doesn't matter if I see that somebody tweeted out my address. It matters that my address is out there on the internet.
And in the kind of distributed community network we've been talking about, that might be a lot harder to lower the boom on, because it's not just three platforms that could potentially stop it. That's right. Yeah. And it's also not something that fits in this framework of consumer empowerment. Right. When you hear Nick Clegg talking about Facebook's vision, that we'll talk less about Facebook's choices and more about you picking the experience you want, like, that's great. But where is the line of responsibility for the hosts of these platforms? And there's a battlefield there. And if the edge providers, the platforms, start to kind of opt out of it because they're sick of getting yelled at in front of Congress, which, like, I get it. But at some point that fight happens. Is it the infrastructure providers? Is it Cloudflare? Is it Amazon Web Services? Do you feel you've come to answers to these questions? No, absolutely not. Nobody has satisfying answers, but we need to be having conversations in these terms. Where is the line between where consumer choice is appropriate and where it isn't? And I think there would be significant dissent even in this room about where to draw that line. And at the risk of going meta, no pun intended, who's the "we" that has those conversations? Well, exactly. Of all of the failures of democratic governance of the Internet, it's that there aren't real structures for even attempting to create consensus around this. Got it. Let's talk AI. I feel like we kind of have to. And I don't know, Tracy, if you want to get us started, bringing your AI studies to bear on the kinds of things we've been talking about. To what extent is AI a solution, a complication, or both for content moderation? Is that module maybe I could turn on less a painstakingly, artisanally crafted list of bad words like poop, and more the AI thing that has been trained on all sorts of bad content, I know where we can get some, and now works pretty well?
I don't think I have a very satisfying answer, but AI is a tool. It's just math and data and some models that we build, and it can be used as a tool to accelerate the review of certain types of content. But there's still a lot of inputs to the AI that we have to discuss and consider. What are the values? What are the types of things that we're going to say are okay or not? What types of decisions do we want the model to make? What's the threshold that we set for okay or not okay? Is it different in different contexts? Do we train on different data for different communities? There's still a lot that needs to be discussed. So it's not a magic wand that we can wave and make problems go away. That carries forward the unimpressed-in-a-good-way stance: hey, it's just a tool. And that's if AI were used by the platforms. What about the use of AI by participants on the platforms, to do a kind of new generation of brigading and sock puppeting? I could be corresponding with what I thought were 20 really interesting people who were also, this is what makes them interesting, interested in my work, and we go back and forth. And it turns out they're all just ChatGPT working slightly overtime. And at some point those friends of mine start telling me how important a Rolex watch is. Yeah, I think AI can be very problematic in increasing the distrust that is already rampant across platforms. There's this concept of the liar's dividend, which is that even if something is a true video, somebody could say, well, but that one's a deep fake. And now there's all this uncertainty, which leads to paralysis. If you no longer know if anything is real or not real, you can choose to believe the things that you want to believe, and anything that you don't like, even if it is real, you say that's a deep fake, and something that you want to believe, even if it is a deep fake, you say, no, that looks very genuine.
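Tracy's "what's the threshold, and is it different in different contexts?" question has a very literal shape in moderation systems: a classifier emits a score, and humans still have to pick the cutoff per context. A minimal sketch of that idea, with entirely made-up context names and numbers:

```python
# Hypothetical per-context thresholds. The same model score can be
# acceptable on one surface and not another -- picking these numbers
# is exactly the human, values-laden decision Tracy describes.
THRESHOLDS = {
    "kids_platform": 0.2,      # very cautious
    "general": 0.6,
    "research_archive": 0.9,   # very permissive
}

def moderation_decision(violation_score: float, context: str) -> str:
    """violation_score: a model's estimated probability the content violates policy."""
    threshold = THRESHOLDS.get(context, 0.5)  # fallback for unknown contexts
    return "remove" if violation_score >= threshold else "allow"
```

The point of the sketch is that the model only supplies the score; everything that makes the decision defensible, which contexts exist, which cutoffs apply, what happens on appeal, sits outside the math.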
And so we now have an even bigger problem trying to figure out what is acceptable or not. Are manipulated videos acceptable? Do we disallow any kind of manipulation? Do we disallow any kind of synthetic content? I don't think the answer is clear. Kasia, do you have a clear answer? JZ, do you have a clear answer? This is why we have the Applied Social Media Lab. Seriously, this is why we have it. Yeah, I mean, I think if we go backwards again to some of the expectations that we should have, that maybe we don't: if one of them is that we should know that we're interacting with an AI, and you want to start from that point, that becomes a technical problem. You can start thinking about actual solutions, something that you can actually implement, some kind of intervention to be able to identify what is fake and what is real. Right? But the way that these technologies were built, there was never the initial parameter that if something is built synthetically using this engine, there should be some type of a watermark, or some kind of ability to reverse engineer it and say, is that thing fake or is it real? So again, I don't think that all of these issues are impossible. There are technical solutions if we scope them the right way, and if there are the right motivations or the right expectations that we have of the technology, and we require it. Got it. I love how now the applied statisticians for AI are like, now is our time, and the trademark people are like, this is what we were put on the planet to do. We're going to have marks on stuff to indicate quality or provenance. I've had a strange idea going around for a while, and this seems a great group to just put it to, trying to navigate exactly among the fact that we're not entirely sure what we want. We don't trust anybody to give it to us, but we need it now.
And the idea, which I acknowledge seems bananas, is this. When Mark Zuckerberg told a congressional committee that you don't want me, Mark Zuckerberg, judging political ads for whether they have misinformation or not, because that seems weird in a democracy, and that's why Facebook will be cashing checks for all political ads, that's our contribution to democracy, he wasn't exactly wrong. He was making a point that just maybe isn't the most satisfying conclusion. My thought was, what if we had as part of a program students in social studies classes, in, let's use the American context, American high schools, who would be shown proposed ads to run on Facebook at election time, and under the guidance of their teachers and librarians, and for graded credit, would deliberate together on whether they were so over the edge, however the edge is defined, that they should not run, that it would be a disservice to people to see them. And it turns out, if you do the rough math, there are a lot of high school students. There are enough high school students to vet a lot of ads, and we'll find some way to fund it. An American Rescue Plan, or let Meta, you know, the Oversight Board was just the beginning, here's another three hundred million dollars. What's wrong with this idea? So just to be clear, we are outsourcing content moderation to American high school students. What? What? It's the marketing I just don't know how to do exactly right. Right, right, right. Yeah, I mean, I think I see what you're circling here. I like the idea; you've argued in other contexts for using more jury-like mechanisms to do these assessments. I think there is value in that, and that is effort that we can bring to bear.
I guess I would say it is not going to be sufficient to have a purely SETI@home-like outsourced system for making these decisions, and what you're going to need, I tend to think of this as, what are the structures where we can bring together all of the right players to talk about the principles at play? And that's going to involve governments, and that's going to involve industry, but it's also going to involve people from civil society and advocacy. And I think there's a few contexts in which that's happened before. The Christchurch Call is a good example for the issue of violent extremism online. GIFCT was another program that happened under President Obama. There are models for how that can happen. And I think whether we're talking about assessing harms and threats from AI or we're talking about assessing standards for social media, we need to be looking to those kinds of structures that allow folks from across society to see where their seat at the table is, and how authority has been devolved from the platform companies to some other entities. Because that's my ultimate advice for the platforms, to the Zuckerberg quote that you gave: the beatings will continue as long as you continue to hold on to that authority, right? Because there's always going to be another example of how you messed up. There's always going to be another example of, well, you said you were against the Nazis, but look at all these Nazis we found. And so you've got to find some way of devolving that authority to some other group, and the groups that I mentioned are the ones that I think need to be around the table, as opposed to just what you're describing, which I think could be a part of it, but would likely, in my view, be insufficient on its own.
Yeah, it sounds like we need new tools in the toolkit, especially with trust at such lows for the old tools: new institutions, new institutional configurations, things that try to draw the best of public and private and community rather than what sometimes seem like the worst. Yes, and just to, again, kind of go back to what I think President Obama would say as he's here: he views this as a significant focus of the post-presidency. It's the reason why he spoke at Stanford last year about misinformation issues and the information economy, and why he was excited to come here and is looking forward to visiting afterwards with members of the lab, and why we also have Voyagers from the Obama Foundation here, and why, you know, disinformation and issues of AI are going to be topics that we discuss at the Obama Foundation Democracy Forum in November. He views his post-presidency as a way to think about, what are the structures, what are the convenings, what are the ways in which we create these lines of effort? Because it's going to require participation from all those groups I mentioned. Other thoughts? Well, we're nearing the end of our time. I always thought of Marc Andreessen as that nice Netscape guy, and apparently he had a manifesto the other day. I couldn't check because he blocks me on Twitter. Hi, Marc. I don't know why. I don't think he probably uses Block Party. But part of it says: it is time once again to raise the technology flag. It is time to be techno-optimists. Combine technology and markets and you'll get what Nick Land has termed the techno-capital machine, the engine of perpetual material creation, growth and abundance. Our enemy is the ivory tower, the know-it-all, credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, unaccountable.
This is a run-on, sorry, that's, I'm playing into his hands. Playing God with everyone else's lives, with total insulation from the consequences. There's definitely a lot of views out there from some industry folks. I'm just curious, if you had a jeremiad, a manifesto, a few words, what would you want to offer into the discourse as people think about our digital future? That's one vision, and I'm sure we could even find elements within it to agree upon. But I don't know. Anybody want to offer their own couple sentences? I'll take a swing at the manifesto. That's a very free-market manifesto about the need for continual growth and for a relaxation of all constraints, regulation, trust and safety, those things. I think what I would say is, even if you are a free-market maximalist, if you are an anarcho-capitalist, the idea of imbuing democratic values in technology platforms is not at odds with your end goal. In fact, it is in service of it. Because the entire system that allows you to reap the benefit of this market is built upon the traditions of democracy, whether that's rule of law or participation in the market by all parties. Those things need to exist for these platforms to have achieved the dynamic growth that you rely on, both for your ethical reasons as well as your financial ones. And to prove this: you don't want to be a billionaire entrepreneur in China. That is a context in which, if you run afoul of the state party, you can just be disappeared. And I think that is an important thing for us to remember: it's not at odds with the growth and success of these platforms that we consider things like how to imbue them with democratic values, how to make them more safe. Instead, it's what allows the dynamic innovation economy that America has pioneered to exist. Got it. Thank you. I feel like I should have been taking notes. It's very good. Anyone else? Tracy?
I think everyone deserves the right to be able to participate in a healthy, safe digital ecosystem and partake in digital prosperity. I think we can be a lot more ambitious in public-private partnerships with the government to achieve really audacious goals around internet governance, in the way that we did the space missions and COVID vaccines: an alignment of government with private industry to solve the really difficult problems. Got it. Thank you. Yoel? You know, I tried to read the manifesto, and it's about 400-odd sentences that all start with "we believe." Yeah, I had Claude summarize it. It's a statement of doctrine, and I guess my response to the manifesto is a counter-manifesto, which is: simple "we believe" statements don't work for internet governance. If you think there is an obviously right answer to any content moderation question, to any governance issue, there isn't. You have only bad options. Every decision is bad, and your task is to figure out what the least bad one is, given your goals and constraints and threat models, and it's naive to believe or to expect that there is a single universally right answer. Sorry, Netscape guy. Kasia. Yeah, so something that I definitely saw when I was working in government: people would come and say, we'd like you to make an app that's going to fix this problem, and we'd say, OK, we'll take that under consideration, but why don't you tell us about the problem? Because often technology is not the solution. It's a people problem, or it's a process problem, or it's a culture problem, or something like that. So I guess I don't actually really believe in manifestos, but you're forcing us to create some. So I guess the line would be something like: technology might not be the solution. People fix problems, not technology. Fair enough. I love the old "I'm not much for manifestos," but then you, like, drop the mic.
So: the concrete over the vague, action over complacency, imagination over status-quoism, both literally the status quo and the thinking that there is but one inevitable trajectory we're on, and that it's just a question of whether we close our eyes as the roller coaster ratchets up the hill. These are the things we want to elicit and cultivate in just one corner of this analog and digital universe as we start this lab. We have two further sets of discussions that follow, and we really hope you'll stick around for them. This is the most secure environment you've been in in a while. We have our faculty panel coming up, where we're going to hear about all sorts of ways in which, concrete over vague, imagination over status quo, action over complacency, our faculty have been doing extraordinary things. I see Latanya Sweeney right there, and her public interest work, and the way she is, especially for the students here, helping people think about public interest technology and get involved in it. And then a follow-on panel with Nabiha Syed, Nancy Gibbs, and Martha Minow on the state of social media right now and what's next. We really hope you'll stick around for this, and I am so grateful, we are so grateful, to our panelists for your contributions today, for just such a sprightly conversation, with eyes open towards the realities and the negative stuff but also a determination to see what it would be like to build something that works better for everyone. So thank you all so much. So grateful.