So thank you all for those remarks. Some people are guiltily tweeting them out, I'm sure, after that. What I want to do is ask a few questions myself, and then we'll open it up to a couple of questions from the audience. Tull, I want to start with you. To the extent that we have to, we're going to have to live with this reality for some period of time, even as we move toward solutions. You made a comment in your remarks about the right way and the wrong way to report on this. Given how many reporters these researchers talk to every day, and for the folks here who are in the media and thinking about this, or even in communities where these issues are coming up: what's the right way, and what's the wrong way, to talk about this phenomenon when it's happening in your community?

Well, I mean, there is no one-size-fits-all here; it depends on the case we're talking about. In the case of the New Zealand attacks, a reporter who didn't know that the attacker was claiming some imaginary character as his hero would actually report on that, and it would make print; but a reporter who was aware of what he was really referring to wouldn't. It's basic digital literacy, and I think it affects not only national reporters but local reporters, especially those who haven't grown up on the internet, quote unquote. I don't know what form or shape it would take, but just familiarizing yourself, reading the work of other people, these sorts of things would go a long way toward how to, and how not to, report on these issues.

So I want to pose a question to both of you. Kalal had on one of his slides a listing of some of the attributes that enable hateful content: the visibility quotient, anonymity, I can't remember what the other two were. Accountability. Accountability, sort of the lack of accountability. There are certainly some in Silicon Valley who would argue that this is exactly the power, the liberatory power, of the internet. They might say it with a kind of messianic devotion, but I think there are others who would say, look, you don't have #MeToo without some anonymity. You don't have #MeToo without the ability to garner public visibility. That would be an example where news organizations actually changed some of their standards of corroboration in response to an online social movement. So how do you think about that challenge? Are we deluding ourselves when we think of these as strengths of the internet? Or is it about preserving the positive power of those attributes without drowning in the oppressive side?

Yeah, take the example of taxation: I would love to put a tax on everything and pay for public institutions and really do college the right way. But the minute you tax surveillance advertising, you get into a situation where you've sanctioned it, just like big tobacco, right? So we're back at the beginning. And part of surveillance advertising, the actual way things work, is that anonymity is no longer possible. You have to make a significant investment in remaining anonymous to platform companies in order to remain anonymous. And if you remain anonymous, you don't get the lift and the bounce that you need to run networked harassment campaigns, because you're worried about who's going to be connected to you. Just by being connected to you, that reveals something in some other network.
The way we de-anonymize accounts at this stage, you don't just look on Twitter for evidence about Twitter. You start to look around, and you look on other platforms. I have an article in the Journal of Design and Science, in a special issue that Ethan put together, on "on the internet, nobody knows you're a bot," which is a riff on the old adage "on the internet, nobody knows you're a dog," the cute little New Yorker cartoon with a dog at the computer. It's great, but it doesn't hold true anymore. And I think we have to look back at the last twenty or so years of social media and realize that, yes, these companies have built surveillance technologies that are really good at shielding the people they don't want to hold accountable for using the tools they've built. This is not hacking. This is low-tech. It's using the features, and it's bringing massive swarms of people to basically shut up journalists who are troubling the way they want to spread disinformation, or to attack journalists or professors. And I think accountability has to start with platform companies being much more transparent about what they know and what they don't know, what they're acting on and what they're not acting on.

So we see right now in the disinformation space a lot of focus on creating this idea that disinformation happens out there, that it's a foreign-operative problem. They're not looking at the domestic space, because there they'd have to confront free speech. But what we know about disinformation is that it is as much a domestic problem as it is a problem of intrusion. There's a lot to unpack there, but no, anonymity is not guaranteed. And if people knew what the rules around anonymity were, they'd be much more likely to think differently about their social media presence.

So, Safiya, you, on this question of the positive potential versus the oppressive potential of the same attributes?

Well, I think part of the challenge is that we, the big "we," are wrapped up in this idea that the platform is somehow neutral and can be deployed for good or for bad. I think that's a false start. The truth is that the platforms are designed both for massive extraction of information about us and to return as much profit as possible to shareholders. One of the reasons we're caught up in not holding the platforms accountable is that they have invested deeply in the notion that they are not media companies, that they're just the dumb pipes, so to speak. This is where we get into Section 230 of the Communications Decency Act, the provision that governs platform liability on the internet. In this sense it again puts the onus back on the public: how does the public use the internet, for good or for bad, for #MeToo or for white supremacy? What critical internet scholars are doing, and certainly what we're trying to do at the UCLA Center for Critical Internet Inquiry, is destabilize these ideas that platforms are agnostic, that users are simply galvanized or information is simply weaponized, when in fact the platforms play a really integral role. Ethan mentioned content moderators in the Philippines. It's because of the work of scholars like Sarah Roberts, who wrote the first academic study discovering that there were people all over the world moderating content, taking content down, and working within all kinds of nation-state rules of engagement around content.
That work alone destabilizes the idea that the internet is a free-speech zone. What we know is that these companies are doing brand and reputation management work. And in some countries, your brand is not tarnished when you let white supremacists rule on it, or you let a GamerGate go down on it; in fact, it might bolster your brand. So these are values questions that we really have to hold up in a different light and talk about, rather than, again, letting the platforms off the hook.

So I'm going to ask Ethan a question, and then we'll open it up, so think about what you might ask. And just remember, questions are interrogatory, not declarative; that's just a pro tip. Ethan, I have a paradigm question for you about some of the concrete things you advocated. I think someone could leave this discussion still wondering: from a public policy perspective, do we face an eradication problem or a resilience problem? I think someone could look at Minow and say it's unclear whether he was sure which of the two this was. From a paradigm perspective, how do you think about it?

I'm clearly advocating resilience, not eradication. As much as I would love to press a button and delete all white supremacists and all hateful voices from online spaces, I don't see good ways to make that happen. What I actually think we want to look at is how we attack existing platform dynamics that trend toward radicalization. It's very easy right now to fall into a data void, as danah boyd has talked about here, where people have been so effective at creating data to lead you toward their agenda that it ends up dominating the search engines. It's very, very easy to find yourself on YouTube, not necessarily just following the recommendations, although that can be one piece of it, but following these very carefully constructed paths that try to take you from seeking mental help for depression, to self-betterment, which rapidly turns into men's rights, which has a way of turning into white supremacy. So we need to look at the ways platforms are already being exploited.

What I'm pushing for in the long run is a participatory internet that is much, much more heterogeneous, with many, many more platforms, the vast majority of them actually governed by the people who use them. It's much closer to a model that looks like Reddit than one that looks like Facebook. People who don't know Reddit well are probably wincing at hearing that. The interesting thing about Reddit, which we're doing a lot of research on, is that it is not a vast cesspool. It's actually an enormously complex community, much of which works really, really well, with a small number of cesspools. The resilient future I'm hoping for is one with fewer cesspools that are less powerful at taking over and infecting these enormous networks we have little control over. So, to be really clear, I don't have a good answer to the Minow question. My answer is very much about getting far less centralized and far more distributed, and then working individually with those communities when they continue to be deeply problematic, like 8kun or Kiwi Farms or some of the truly awful ones out there.

Thank you. We have time for two questions. One back here.

Hi, I'm Chastity Pratt. I'm a Nieman fellow at Harvard.
I'm launching a media co-op to report on school funding, and I have been attacked by hateful content online. So I'm wondering if Joan and the rest of the panel could give some tips to the media and the media funders here about how we can protect our content from these online predators who really just want to disrupt the work we're putting together.

Yeah. From my perspective, it's those small number of cesspools that are allowed to persist; in the way they are marginalized on the rest of the net, they become highly motivated and mobilized to do damage to other communities. So you might have a small group of people. You don't think you should be a target, you're not doing anything so crazy that you would end up being one, but they come across your content and then they start to strategize. They actually start to think with the tools of where you are: well, can I get into their comment stream, can I get into their DMs? They're getting some media attention; can I pose a counter-narrative, plant some kind of other story that will push our agenda? And you can imagine, with school spending, there are a lot of ideas about who actually is using school funds, and I've of course read a ton about school resources being used for ESL, English as a second language, and of course that brings all the racists right in.

So that's not an answer to your question, but it gets at what I think we need to think more about, which is holding these platform companies accountable. There are all these places on the net where you can do content moderation, even at the level of the government. The government has, like, an on/off switch; it really can't do anything else, but it can do that one thing: it can literally shut down the power grid to turn things off, and we've seen that happen in other places. At the level of the individual, though, it's really hard, because the platforms haven't built tools we can use. There were a few instances where people had mass block lists that you could upload into your social media, but that didn't stop others from seeing the bad stuff around your content. So as we work on accountability and transparency in this space, we also have to work on a toolkit of refusal, one that says no: we don't want other people seeing this content, or we want to be open to some communities, but when we see evidence of a swarm, we want you, the social media company, to help us quarantine and bracket it. Because what do most journalists do? You shut down your account for a couple of days. That's what you do. And that's a terrible thing to do if you've just launched a really good investigation and you're trying to make an impact and listen to your audience, and the only recourse you have is to shut it down. I don't think that's useful, but right now we're in a problem space where the tools just aren't there for what we need.

We have time for one question over here. Yes, sir.

I'm the publisher of an independent publication in Long Beach, California, and we have a problem in our city: in certain neighborhoods, our Nextdoor pages are particularly grotesque, and there are two issues, both the hate language and misinformation.
We've taken an approach of trying to proactively combat and engage, to correct where possible, though some people in our newsroom, some of our leaders, say just don't ever engage. But this is a resource-intensive practice for us. An example: there was a rumor recently that a homeless shelter was going to be built in your neighborhood park, and 15,000 comments later we finally got control of it. It overwhelmed two of our city council offices; it overwhelmed the city council meeting for a while. My question is: are there any best practices or counsel about whether to do this kind of direct, proactive engagement to correct things like this, any resources, any tips? Because it's exhausting, and it is resource-intensive for us.

I would just offer that it's true that education and educating are incredibly resource-intensive. In fact, I remind my students, even though they're enamored with things like search engines, I ask them why they came to UCLA, then, if all knowledge can be known in a search, and then everyone gets reoriented. So I think those investments have to be made, because those are also democracy-building projects. On direct engagement: we know from the learning sciences, for example, those of us who work in education, that learning is iterative. It requires human beings to go back and forth, to engage with ideas and material, research in particular, and that is how learning happens. It really isn't just a static push, and then you digest, and now you're educated.

So I guess I would say one of the things we think about a lot is slow information movements versus fast information movements, looking for other ways that people have come to understand things: the local farmers market and the slow food movement are actually a paradigm we can relate to when we put them up against mass multinational corporate fast food, right? The same goes for our information environments and our knowledge environments. And being very clear: one of the worst things in the field today is the flattening of knowledge and information and propaganda by calling it all "content." All content is not the same, and we need much more sophisticated, nuanced ways of understanding knowledge, evidence, research; even flattening it all and calling it "data" isn't particularly helpful. I think that work is really important, and maybe there are ways to partner with CSU Long Beach, the community colleges, and others, advocacy organizations and community organizations that are invested in deepening that knowledge.

Ethan, you want to get in?

Let me give a model that's worked remarkably well in Mexico around political disinformation. There was a project set up to try to debunk rumors on WhatsApp, started by a news site called Animal Político, and it ended up recruiting 99 other newspapers in Mexico. What happened was, if you saw potential disinformation on WhatsApp, which is a very hard network to monitor because it's encrypted and we don't have a good way to come in and look at it, you could send it to the newspaper, and the newspaper would run a fact check. A group of a hundred newspapers ended up fact-checking an enormous amount of information during the election, and then people would inject those fact checks back into the WhatsApp threads. It did remarkably well.
What's happening with Nextdoor is this sort of disappointing model that we've seen happen again and again and again, which is that it's essentially an extractive product. It invites people to weaponize their fear, mostly of Black and brown people, and use that to create an endless stream of content, which, by the way, can be micro-targeted to them based on their geography. It is not the sort of community you would really want to create as a healthy community site. But I offer this to the room: what is the community site you'd want to create, if that is the example of what we want to avoid, the example of what happens when we leave this task to the market? What can we actually imagine creating in Long Beach, where maybe your paper is a major actor in creating it, along with the community college, along with other parts of the community? How do we look for something better at the same time that we're fighting something worse? That's the balance I think we have to take on.

So I had three takeaways from this conversation. The first is confirmation bias, which is a very patriotic emotion. (With the late afternoon, that joke really did not land.) But I think the questions we're getting affirm this point, Safiya, that knowledge is really power here. The folks who are studying these issues are the equivalent of the frontline public health workers of a hundred years ago, and in every community we've got a university or a community college where, I guarantee you, there is an engineering faculty and a social science faculty that would love to have these conversations, and certainly a student body that is interested in living in this world and in having these conversations. So I think there's a real takeaway here: invest in that. It's a civilizational achievement that we have this kind of knowledge base in the world.

The second takeaway is that we should embrace the fact that the way social media is now is radically contingent. We do not have to accept the social media environment we have as inevitable, and we should just ask the question of how it could look different. I take that from all of you: each of you, in your reporting and in your research, has asked why this exists, what forces are causing it to exist, and what the harm is.

And the third is that I've heard from all of you that solutions can actually start in community. We're not powerless. People can take action to create online communities that are more positive, that at the very least are exemplars of what we should have, even if they have to go back to forms of social media that feel less positive. But also, in a moment of heightened public anxiety, and therefore policymaker response, we should be clear about the kinds of communities we want, in addition to the specific rule changes we might want to see. So I think those are incredibly positive takeaways for a group that really is on the front lines of the effects of this. So please join me in thanking our panel for their contributions.