I'm Anne-Marie Slaughter, the CEO of New America, and welcome. This is an event we've been looking forward to and work that we're very proud to be unveiling and talking about today, although I would prefer that it were not such a timely subject. But given where we are as a nation and a world, combating online hate is essential, and doing it in the right way is what we're going to be talking about this morning. I grew up in Charlottesville, Virginia. My parents still live there, as do my brother, my sister-in-law, my aunt, and my nieces and nephews. What happened in Charlottesville was deeply personal, as well as horrifying on so many levels. Charlottesville has a complicated history. It thinks of itself as the home of Mr. Jefferson's University and the Declaration of Independence, and it has a lot of history to face, but to then be known as the symbol of white supremacy and hate has been a process, I think, for people in Charlottesville and for people across Virginia. And of course, it is just one of the many explosions of hate: in Poway, in Pittsburgh, and of course globally, in Sri Lanka and in New Zealand. One of our newest staff members just came from New Zealand and had been personally touched by what happened there. And when I was in Houston, I met with Bassem Hamid, a Syrian American who works with Bobby, whose cousin was killed in the Christchurch massacre. It gives a sense of how we are interconnected nationally and globally. So we face this rise of hate against many different minority communities. Any of us who are on social media know the ways in which social media is being used. As somebody who went on Twitter in 2011, I can tell you it was a completely different environment.
And I can now barely use it, given not only the personalization of the way social media is used, but of course the hate to which we are now exposed on social media, and the way social media is used to recruit others. That's going to continue to grow. I think you can't shut down social media, so we have to expect that it is part of our lives. But the calls for stopping hate on social media raise a complicated question. It's a complicated question for First Amendment reasons. It's a complicated question for open society reasons. And so at New America and at ADL, we want to think about how you monitor online hate in a very data-driven way. We need data. We need evidence. We need as much science as we can get, even in a deeply, deeply emotionally charged setting. So rather than proceeding on fear and intuition, we have built a dashboard that will offer both a visual and a qualitative understanding of hateful content. That's what you will be hearing about and discussing today. And that dashboard is a tool to drive discussion, and ultimately action, about how hate spreads online and how to counter it. I'm going to turn it over to Jonathan Greenblatt in a second, but I also want to say that ADL has been our partner in this work, and that, to me, has enormous significance. I come out of the foreign policy community, a world that has defined terrorism in recent years almost exclusively in terms of radical Islamic terrorism. That is not right. We know that's not right, but that is the way the foreign policy community thinks about it. In that world of terrorism, Jews and Muslims are on opposite sides, in many increasingly frightening ways. To be able to think about hate more broadly, about white nationalism, white supremacy, white nationalist terrorism, terrorism of many different kinds, matters. Terrorism itself is a complicated word, because it insists on a political motive that is harder to see when the politics are closer to home.
So if you think about the Charleston massacre, to me, that was unquestionably political terrorism. It was harder for many people to see up close because of race, because of religion, because of many different factors. If you think about it as hate and violence, and think about who the targets of hate and violence are, then in this country, Muslims and Jews and many others are on the same side. That, to me, is enormously important. As much as the work we're doing matters, how we are doing it says there is a new set of alliances, a new way of thinking about a lot of these issues. And if we can target hate without preconceived notions and labels, and monitor it, and think about who the victims of hate and the violence that often accompanies it are, we really can think about new ways to fight it and new coalitions that can change the landscape of this country. So with that, I'm going to turn it over to Jonathan. I welcome you all, and again, we're enormously proud to be doing this work.

Good morning. I want to thank my friend Anne-Marie for that nice introduction. And I can tell you, we at ADL are incredibly proud and pleased to be working in partnership with New America. So thank you for your leadership on this and so many issues. And I'm really glad we're talking about this topic of online hate today, because at ADL, we've been tracking extremists and fighting hate literally for over 100 years. And I would say to you today that social media really is the new battleground. We are seeing a disturbing rise in hate across the board here in the United States offline, but it is indeed online that is really pouring the accelerant on this fire. It has a different set of dimensions and different kinds of characteristics from the ways intolerance previously used to travel. Because of what's happening online, hate has a kind of velocity and is happening with a volume and intensity we've never seen before.
We recently did a survey at ADL and found that 37% of all Americans experienced severe online hate and harassment just last year, including sexual harassment, cyberstalking, physical threats, and other forms of intimidation. This is an 18% increase over what we saw in 2017. So as Anne-Marie said, this is stuff we never imagined before. And as you said, you were talking about the experience of your staffer and the person you met in Houston whose cousin was one of the victims in Christchurch. There is a through line from Charlottesville to Pittsburgh to Christchurch and now to Poway. I was there in Pittsburgh days after the Tree of Life massacre last year, the most violent anti-Semitic assault in American history. And I come here today having flown back on the red-eye on Monday from San Diego. I was in Poway over the weekend, praying with the family of Rabbi Goldstein while he was in the operating room and mourning with the families of other victims who were affected by the shooting. What we know about the gunman, whose name I will not use, is that he was an active participant in this movement of white supremacy, which has gotten such traction online. In terrorism parlance, you would call him a lone wolf who was radicalized online by the kind of content, and with the kind of tactics and strategy, that have long been employed by ISIS and Al Qaeda. This is why I would say to you, as I said to members of Congress just this morning, and as I've said publicly again and again: white supremacy is a global terror threat, and it is overdue for policymakers in both branches to recognize this issue, to resource it effectively, and to apply the same level of energy and intensity that we've applied to the fight against Islamist terrorism to the threat of white supremacy, because it threatens to consume us all. These extremists have really manipulated and utilized social media to move from the margins into the mainstream of society.
And we can't allow them to exploit these online platforms anymore. It is time not just for Congress to act, not just for the executive branch, DHS and DOJ, to act; it is time for the companies to act. You see, this is a multi-stakeholder world. And the idea that Facebook and Google and even Twitter, and we're going to talk about that this morning, can somehow hover over the rest of us, that they bear no responsibility for this, is just plain wrong. These companies need to find ways to make sure that the products they built for the good of their users are redesigned to ensure the safety of those same users. We can't allow a sort of libertarian philosophy to alleviate them of responsibility. We need them to apply the same innovation and creativity that designed these algorithms to re-examining them. And if they can't do it, I know lawmakers will. So we're going to talk today about this project that ADL and New America partnered on, exploring online hate. It's a first-of-its-kind visualization of this extremist content that, as Anne-Marie said, targets Muslims and Jews, targets immigrants and refugees, targets the LGBTQ community, targets anyone who doesn't conform to this notion of a Christian white European race. By shedding light on how and where these extremists operate, the hope is that this drives conversation, that we can convert conversation into proposals, convert proposals into policy, and create the change that we need. The dashboard is just one project of ADL's Center on Technology and Society. We literally launched a center in Palo Alto in 2017 because Silicon Valley is the front line in fighting hate. And we're doing all kinds of interesting projects and pilots in order to explore ways to ensure, again, that these online spaces are safe for all kinds of people. New America has been a fantastic partner. Bobby, I credit you, and I'm so glad, Anne-Marie, that we have the opportunity to collaborate together.
So as I said a minute ago, the companies need to get it right. We need the companies, we need their innovation, we need their ingenuity, but oftentimes it takes a nudge, or it takes a threat and the specter of regulation. For that reason, I'm incredibly pleased that we have with us today Congresswoman Katherine Clark of Massachusetts, the Vice Chair of the Democratic Caucus. She has been a leader in Congress working to address the offline consequences of online hate. We met early on when I started at ADL, and I can tell you, having formerly lived and worked in Silicon Valley, there are few people here inside the Beltway who really understand this. Congresswoman Clark is one of those people. Our surveys indicate that people being harassed are taking steps on their own to change this, but ultimately it's the Online Safety Modernization Act, which I hope you'll talk about, that will be the way we really create the change we need on swatting and doxing and all these issues. So without further ado, I want to thank Congresswoman Clark and invite you up here to give some introductory remarks.

Thank you so much. Thank you, Anne-Marie and Jonathan, for your work and for inviting me here today. I am so pleased to be with you, and I hope that this inspiring conversation you're going to be having today will really be a launching point for communal action. Today, under your leadership, New America and the ADL are taking an important step in our contemporary fight against hate, violence, and discrimination. The newly launched dashboard, and I hope you all have had a chance to go on it, it is fascinating to see what hashtags are trending, will help us all transition from grief to action, and the ADL center has been such a key component in putting it together. We have grieved together for the lives that are lost, and we know that hate and bigotry have a newly emboldened role in our global community.
And tragically, that hate-fueled terror has started to feel routine. Today, there was a shooting, with students' lives lost, at the University of North Carolina at Charlotte. I looked at the dashboard on the way over to see where the hashtag UNCC ranks; it is very low on hashtags. These kinds of violent episodes are becoming routine parts of our news cycle. We cannot allow ourselves to be desensitized to the gun violence and other forms of violence that are now being seen as regular parts of white supremacy and anti-Semitism. Sometimes it's months, too often it's only weeks, between new headlines, and it can lead to a feeling of hopelessness. Whether it's the Emanuel African Methodist Episcopal Church in Charleston, South Carolina; the Tree of Life synagogue in Pittsburgh; the tragedy in Christchurch, New Zealand; or, just this past weekend at the end of the Passover holiday, Poway, California. Every day we see the manifestation of this online hate in the news, and we can feel it on the rise. And we internalize these shootings at churches, vandalism at mosques, and violent attacks on people of color and our LGBTQ brothers and sisters. We fear for our neighbors, our children, and ourselves, because we know that it can be simply because of who we are, who we love, the color of our skin, or where we worship. In recent years, we've seen an alarming rise in white nationalist violence, as well as the mainstreaming of the ideology in political rhetoric. We know the roots of these crimes and the radicalization of people are taking place online. And we know that the incredible power of the internet to build community and to empower voices is also building, as Jonathan said, these threads that connect us across the global community to hate-filled events.
Bigoted, hate-filled language has found an echo chamber in our social networks, and language from the darkest corners of society that would be taboo in our daily interactions has been empowered by the faceless and fearless protection of online anonymity. The security that a computer screen provides has made our communities less safe. The internet's power to bring us together, and it certainly does some incredible things and incredible good, also energizes and empowers hatred by creating these like-minded communities of actors and giving them permission to act on this violence. According to a study by the Pew Research Center, 20% of adult internet users have been affected by cyberstalking, persistent harassing emails, and other unwanted online contact. We know this has a disproportionate effect on women and people of color. Women, who represent the majority of targets, suffer some of the most severe forms of online assault: they receive rape videos, extortion, and doxing with the intent to harm, and they experience abuse in multi-dimensional ways and to greater effect. They are the vast majority of victims of non-consensual pornography, stalking, electronic abuse, and other forms of violence. LGBTQ youth experience online bullying at three times the rate of their straight peers. And in recent years, the internet has become an easy way for abusers to stalk victims of domestic violence and prey on vulnerable children. Just when you think you really can't be shocked by statistics, we were alarmed in our office when the Department of Justice recently declared that sextortion is by far the most significantly growing threat to children in the United States, and that sextortion cases tend to have more minor victims per offender than all other child sexual exploitation offenses. We have seen sextortion cases with 75 to 200 victims from one bad actor. Unfortunately, online abuse exacts many costs, but these are also routinely minimized.
Online harassment can impose a steep tax on freedom of speech, civic life, and democracy. As Anne-Marie pointed out, one of the quandaries, one of the ideas we have to wrestle with, is where regulating speech impinges on the First Amendment, and where this hate-filled speech impinges on the right to be online, to speak your mind, and to use the internet. It is a balance that all of us are going to need to set, for our society and for the global community. The reality is that too many people face harassment every single day; they are on the internet just trying to pay bills or do their jobs. This is a reality we need to change. The Exploring Hate dashboard is a crucial step, because we need to be able to document, track, and monitor the prevalence of hate, as well as know where it comes from and where it exists. This dashboard is going to allow us to answer crucial questions: Which tweets are being shared the most frequently, and by whom? Which hashtags are popular? Which topics are being discussed? What hyperlinks are being propagated? It will give us a deeper understanding of the themes, misinformation, and dangerous rhetoric being disseminated. This work is so crucial, and I'm so grateful to New America and the ADL for infusing this conversation with facts and data. Where Congress comes in is the second important prong here: we need to update our laws to give law enforcement the tools they need to understand these modern mechanisms of hate, and to start preventing and investigating harassment before it jumps into real-life violence. In 2017, I introduced the Online Safety Modernization Act, which will put in place criminal consequences for harassment tactics like swatting, doxing, and sextortion. It will take them out of cyberspace and put them into our criminal code. It is tricky work to do; there are many balances that need to be struck, but it is an important step.
We got into this work in my office when a constituent who was a victim of Gamergate came to us. And I have to say, as someone who has run for office repeatedly over the last 10 years of my career, I was no stranger to some very interesting online observations about me and other sorts of coarse rhetoric. But this was a new level. This constituent was receiving very specific videos about her work schedule and where she lived, being shown the weapon with which she would be killed, along with her husband's information, because they wanted to make sure that he would be present as she was mutilated in front of him. All of this was brought upon her by having the nerve to be a woman who designs video games. That was her crime, and Gamergate was going to be sure that she knew there was no place for her anywhere. But what really struck us was the local law enforcement response when she called them. They simply did not understand, did not have the training to understand, that these crimes can jump from online to real life, as we are seeing so grievously play out today. And so she was asked, "What is Twitter?" We have had judges in the last few years in Massachusetts say the answer here is to turn off your computer. We need to change this, and understand this radicalization: how people cross the line from robust discourse online, which we may not like or which may make us feel uncomfortable, into the criminal world and into criminal intent. Some good news in a not-very-good-news story that we're telling today: in the recently reauthorized Violence Against Women Act, we were able to pass a portion of the Online Safety Modernization legislation. It will give grants to states and local law enforcement to help them do exactly what we saw lacking in the response to Gamergate. It will give them the resources and training to investigate, to identify, and to make sure that we are connecting these crimes with the bad actors.
Furthermore, as a result, we have also been able to require that the FBI now incorporate cyber crimes into its crime reporting database. This will give law enforcement a better understanding of what we're talking about with online harassment and hate speech. If we can track it, if we can do what this dashboard does and develop this data, that is how we really begin to put the steps together to combat it. And as community members and leaders, we must continue to condemn bigotry wherever it arises. We must call out anti-Semitism, Islamophobia, transphobia, racism, and homophobia, both on the large scale and the small. We have to be fearless in the face of intolerance, because if we don't draw attention to this, the increased reality of hate crimes and violence will continue. There are not very fine people on both sides of this. We do not want a government that implements a Muslim ban or puts kids in cages. We do not want a president who refers to families fleeing violence as animals or very bad people. We do not and will not return to a world where hate has a home in the halls of power. We cannot be silent and accept this behavior that we have come to expect. As Maya Angelou, one of my favorite authors and activists, said: hate has caused a lot of problems in the world, but it has not solved one yet. But we're here today to be the problem solvers, to be the conveners of kindness and compassion. We are the ones who can put this hate back where it belongs and instead seek justice. And so we are going to collectively be what I discovered on Instagram, which is often our modern town crier. That is, we are going to be peacemakers. Now, I know something about peacemakers, because I have three teenage sons, and my peacemaking used to come in the form of little Ziploc bags of Goldfish, and it worked pretty well. But that's not what I'm talking about. I want to give you a quote from a book, Common Prayer: A Liturgy for Ordinary Radicals.
Peace is not about the absence of conflict; it's about the presence of justice. Martin Luther King, Jr. even distinguished between the devil's peace and God's true peace: a counterfeit peace that exists when people are pacified, or distracted, or so beat up and tired of fighting that all seems calm. But true peace does not exist until there is justice, restoration, and forgiveness. Peacemaking doesn't mean passivity. It is the act of interrupting injustice without mirroring injustice, the act of disarming evil without destroying the evildoer, the act of finding a third way that is neither fight nor flight but the careful, arduous pursuit of reconciliation. It is about a revolution of love that is big enough to set both the oppressed and the oppressors free. So that's what we're going to do. Whether it is taking our beliefs, our radical peacemaking, to Silicon Valley and talking to these social platforms about their role and their responsibility to join us; whether it is having the conversations in our schools about being media critics; or whether it is pushing for changes in our criminal code to say we do not have to accept this, we can find the balance that preserves free speech for everyone and keeps the internet open to all voices who are there to pursue good. These are the things we can do together as radical peacemakers. And I am so grateful to partner with you. I hope you will contact my office, and I hope we can work together on this. I'm so grateful to Jonathan at the ADL for your work, to Anne-Marie and New America for what you're doing, and to the experts and workers you're going to hear from today. This is worth doing. This is the work we have to pursue if we want to establish true security around the globe, but especially here in our communities in the United States. So thank you so much for having me, and I look forward to hearing from you in the future.

Congresswoman, that was enough to make me want to move back to Massachusetts.
So it's my pleasure now to introduce Bobby McKenzie. Jonathan and I are the heads of our respective organizations, and we get to come in when the work is done to introduce it and take credit for it. But of course, we're not the ones who actually do it, and this project would not exist but for Bobby McKenzie, who is the director of New America's Muslim Diaspora Initiative. Bobby ran for Congress in Michigan, and in the process of doing so, he engaged with large numbers of Arab Americans in his desired constituency. It didn't come to be, but part of what that experience left him with was a deep commitment both to working with Muslim Americans and tracking hate and violence against Muslims, which is part of what the Muslim Diaspora Initiative does, and equally to elevating the many, many, many positive contributions that Muslim Americans make, as all other Americans do: not seeing this community, as it is so unfairly seen, only through the lens of violence and hate. So it's my pleasure to introduce Bobby and to thank him for this extraordinary work together with ADL.

Thanks, Anne-Marie. Thank you, Jonathan. Welcome, everybody. Before I introduce the dashboard, I'm told that the most important announcement is that we have two panels. The first panel looks at spaces of hate, and the second panel looks at victims of hate. The announcement is that between the panels, we have lunch. I'm told that I can't start without letting you know that. So I just want to tell you a little bit about the dashboard: how it came about, the methodology behind it, and what we're hoping to do with it. I had been working for nearly a year tracking anti-Muslim violence and crimes at the state and local level, and I have a project on that. And I'm trying to see if I've got the clicker working here. OK, great, I've got it. It was during that year, in the lead-up to Charlottesville, that I was spending a lot of time thinking about anti-Muslim violence and crimes at the state level, but also online.
And then Charlottesville happened. When Charlottesville happened, I thought: we need a bigger project. I immediately reached out to Jonathan Greenblatt, who leaped into action and put me in touch with his team and with Dipayan Ghosh, who's here somewhere, the lead technologist, and together we spent nearly a year and a half building this thing. I want to tell you what the dashboard does. But before I tell you what it does, I want to tell you why we need it. We don't really understand communities of hate online. It's really easy to find bad actors: individuals who are bullying, harassing, or doing any number of things online. That's an enforcement problem. You can reach out to the tech companies, you can reach out to law enforcement, and they'll pull those down. But we don't understand the wider communities of hate. We don't understand how these communities recruit, how they build networks, how they raise money, how they organize, how they mobilize. And we certainly don't understand how hateful content spreads across the internet. So with that in mind, we built the dashboard to ensure that policy discussions are driven by scholarship and data, not intuition and fear. So how do you build this kind of dashboard? We started with 40 seed accounts on Twitter. We had a few different criteria. One, these 40 accounts regularly engage in hateful content against minority communities; that's a must. Two, they're geolocated in the US. And three, they're not bots. Dipayan Ghosh then wrote code to look at every single Twitter account that followed those 40 accounts. That ballooned into a few million accounts. He then wrote more code to rank-order them: of those millions, how many of the 40 seed accounts is each one following? And so we got down to a few thousand accounts, and then we hand-curated down to 1,200. Why did we hand-curate?
Because there were some bots in there, and there were also folks at ADL and other advocacy groups who were following all 40 accounts. So we're following 1,200 accounts on Twitter. What does this let us know? We're not looking for the individual bad actor; we want to understand the ecosystem. This is ground truth. We want to understand the themes, the content, the tropes, and the narratives. And you can use this tool on Twitter to vector in on the other platforms, to get a sense of what's going on there. So the dashboard itself monitors, every single day and in near real time: What hashtags are popular? What topics are being discussed? What hyperlinks are being propagated? What tweets are being shared most frequently? And the top sources as well. You can check out the dashboard at exploringhate.newamerica.org. Let me just end by saying that I'm up here giving a quick presentation on this, but an enormous amount of work went into it. I want to thank Jonathan Greenblatt and Anne-Marie Slaughter, and especially Dipayan Ghosh for his enormous effort, and also Adam Neufeld. I don't know where he's at, but he was hugely involved in this. We could not have done this without them. This is a first step. The next step, we're hoping, is to build a set of other dashboards, to look at other platforms, and to try to understand how hate spreads across the internet, but also to think about the bridges between online behavior and real-world violence. So thank you very much. And I want to welcome now panel one to the stage. Thank you. For anybody who wants to stand, and we have some open seats, please feel free to come up. Thank you again for joining us today. My name is Bobby McKenzie. I'm a director and senior fellow here at New America, and I'm the principal investigator on the dashboard. Dipayan Ghosh is the lead technologist on the dashboard. And I want to welcome you to this first panel, Spaces of Hate.
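The seed-and-snowball ranking Bobby describes, start from seed accounts, collect every account that follows them, and score each follower by how many of the seeds it follows, can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual code; the function name, the toy follower graph, and the in-memory data structures are all assumptions (in practice the follower lists would come from the Twitter API):

```python
from collections import Counter

def rank_followers(seed_accounts, follows):
    """Rank non-seed accounts by how many of the seed accounts they
    follow (the snowball-sampling ranking step Bobby describes).

    follows: dict mapping an account to the set of accounts it follows.
    Returns (account, seed_overlap) pairs, highest overlap first.
    """
    seeds = set(seed_accounts)
    overlap = Counter()
    for account, followed in follows.items():
        if account in seeds:
            continue  # seeds themselves are not candidates
        n = len(seeds & followed)
        if n:
            overlap[account] = n
    return overlap.most_common()

# Toy stand-in for the real multi-million-account follower graph.
seeds = ["seed_a", "seed_b", "seed_c"]
graph = {
    "user1": {"seed_a", "seed_b", "seed_c"},  # follows all three seeds
    "user2": {"seed_a", "unrelated"},         # follows one seed
    "user3": {"unrelated"},                   # follows no seeds: dropped
}
print(rank_followers(seeds, graph))  # [('user1', 3), ('user2', 1)]
```

The ranked list is only the automated half of the pipeline; as noted in the talk, the final cut to 1,200 accounts was hand-curated to remove bots and accounts (such as researchers and advocacy groups) that follow all the seeds for monitoring rather than affinity.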
For those who are following online, please use the hashtag #ExploringOnlineHate. I know that a number of you probably came to today's event with some questions in mind. No doubt you're going to hear some really smart things from this distinguished group of panelists, but I would ask you to hold your questions until the very end. So let me begin by giving you a brief introduction of each panelist, in alphabetical order. Dipayan Ghosh is a Shorenstein Fellow and co-director of the Platform Accountability Project at Harvard's Kennedy School. He was previously here at New America as a fellow, was at Facebook before that, and was at the White House. And again, he is the lead technologist on the dashboard. To Dipayan's immediate right is Ambassador Karen Kornbluh. She is a senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund. She is also a Mozilla fellow, and she is the former ambassador to the Organisation for Economic Co-operation and Development, from 2009 to 2012. She has served in many senior positions in former administrations, and she was the policy director for then-Senator Obama. And I should mention that she was also formerly here at New America, so welcome back. To my immediate left is Cheryl Leanza. She's been working for over 20 years at the intersection of civil rights, technology, and media. She was instrumental in the passage of the Local Community Radio Act and is a longtime practitioner before the Federal Communications Commission. She is here today on behalf of her client, the United Church of Christ's venerable media justice ministry. And lastly, we have Mary McCord, a close friend of New America who is here at many events. Mary is a senior litigator and professor of law at Georgetown Law's Institute for Constitutional Advocacy and Protection. She's a former acting assistant attorney general for national security at the Department of Justice and a longtime federal prosecutor.
At DOJ, Mary supervised all terrorism-related investigations, including those against people who committed their crimes on behalf of ISIS. She has since been involved in a number of important projects at Georgetown, which we'll get into. So let's just jump into this. Cheryl, we're talking about the platforms today, but these are not necessarily new issues. Maybe you could give us a sense of the history of hate in different media.

Sure. Well, thanks, Bobby. It's really great to be here today. As you mentioned, I'm here on behalf of the United Church of Christ, and some people don't realize that the UCC has been working on technology rights and justice from back when cutting-edge technology was broadcast television. The great thing about working for a faith tradition, and it was great to have Representative Clark remind us of that power, is that when I work with the UCC, we can take the long view, the very long view, and work at the structural level to stop injustice, while at the same time feeling this real sense of moral urgency. When I was prepping with Bobby before this event, one of the anecdotes I mentioned was from back in the 60s in Jackson, Mississippi. Medgar Evers was a leader of the NAACP, and he was in Jackson. There were incredible needs in that local community, which was trying to end segregation in schools almost 10 years after Brown v. Board, and he couldn't get access to television. He campaigned and campaigned, and it's a very long story which we won't get into, but finally, he gets a chance in May 1963. He gets 17 minutes on local broadcast television, an incredibly eloquent speech. And unfortunately, less than a month later, he's assassinated, and his wife was always concerned that finally making it on television was part of what led to his being killed at that time.
And so what's important about that anecdote is to remember that, I mean, we all know hate has been here forever and media has played a role in it forever. So what's different, and what can we learn from that history? And the great thing, which we should not forget, about the new technologies is that everybody has access. You don't have to campaign for months; there was even an instance where Eleanor Roosevelt had to intervene so that somebody running for Congress in the same time period could purchase an advertisement. So we all have access, and that's great. But the question is, how much is different now? We have algorithms that really do decide how many people see you when you are on Facebook or on Twitter: what is your audience? So while anybody can get on, how many people can receive your message is really dependent upon, again, the technical structures. And we have a broader economic structure. We're really down to a triumvirate, Twitter, YouTube, Facebook, that controls so much of the online conversation that they have become the new gatekeepers, and there's this ongoing question: what happened to antitrust? Why do we have so few companies? Why is there not more competition, so that you can go somewhere else if you need to? Unlike back then, we have the internet now. Shouldn't you be able to go anywhere? Why are you so dependent? And then, just as important, and still relevant today, is corporate responsibility. We had to hold those companies responsible back then. We have to hold them responsible now. We've always had to hold them responsible.
They have an obligation, and that's one of the reasons, in terms of this conversation and what the solutions are, that the United Church of Christ Media Justice Ministry is very proud to be part of the Change the Terms initiative, which supports a set of model terms of service that we recommend all companies adopt, covering some of the core issues that would move to better implementation, better treatment of hate online. I'll just briefly touch on them; obviously we're going to get into all these issues much more in the panel. First, fair enforcement across the board, which I think is one of the most important things. Unfortunately, these online platforms don't seem to be able to tell the difference between terrorism and hate cloaking itself in the faith of Islam versus terrorism and hate cloaking itself in the faith of Christianity. They can't tell them apart. Why is enforcement not equal for people of color and others? Then transparency; the right to appeal, so that if you are unfairly targeted, you can get the situation fixed; and then the two other really important parts: integration of these matters as the products are being developed, with training of staff and much more sophistication on the part of the companies, and corporate accountability at the governance level. So that's just an overview of Change the Terms; you can check it out at changetheterms.org. We'll obviously talk about that more. I guess the last thing I'll say is that the First Amendment is bandied about a lot, and I hope we'll get into it a little bit, but the First Amendment has really evolved along with technology over time, and an overly simplistic view of the First Amendment, even in this area of private speech, is maybe not the right way to think about it. So I'll leave it there.
Karen, you've started a new initiative looking at the here and now, at how the social media platforms are accelerants for a lot of bad things. Could you maybe just give us a sense of what you're doing and what you're seeing right now?

Great, thanks, and thanks so much for convening us all and for this important initiative. I'm really happy to be back at New America. From my perspective, I just have to agree with the Congresswoman: I think that hate speech, and more specifically the white supremacy that we're seeing, is an emergency. Whether that's the Christchurch shooter previewing his attack on 8chan and then offering to share a link, live-streaming himself killing Muslim worshippers, with 8chan users afterwards celebrating the alleged killer as a saint and encouraging others to commit shootings; or the white supremacist who opened fire at the Chabad of Poway, also previewing his plans on 8chan. We've heard that in the Russian interference in the 2016 election, fear and hatred of immigrants and of racial minorities, African Americans specifically, was exploited. We've seen in countries around the world that racial minorities have been targets of violence organized through the spread of disinformation online. So it's a real emergency, and I think it's absolutely terrific that you're focusing on it. What pains me, as somebody who's worked on technology policy, specifically internet policy, since the internet was referred to as the information superhighway, is how the US stepped back from leading in this area. I have some regrets that we stepped back a while ago, and it's really urgent now, and we're still not leading.
I think it should be obvious that we need to lead, and what's happening in the absence of this leadership is that a lot of countries are stepping in, whether out of desperation or sometimes because it's convenient, and doing things to shut down the internet or to limit access to the internet. And then a bunch of countries are passing laws to require the platforms to take down content in a short amount of time or be penalized. What I just want to mention is that I think the policy has to focus not just on the content that we see online, and this transparency you're providing is terrific, but on what's behind it: on the systems, on the design of the systems that exacerbate, that amplify, the hate. So I just want to talk about that for a minute. The danger if you focus just on the piece of content is that you wind up playing whack-a-mole, so it's not very effective; but also you get into all kinds of First Amendment concerns, or, even in other countries that don't have the First Amendment, free expression concerns. You're giving the platforms enormous amounts of power; they become arbiters of who's online, and they're not doing it, as you said, with sufficient transparency and accountability and appeals processes. So it's not that we don't do anything about the individual piece of content, but let's talk first about what needs to happen in terms of the design. The problem with the design, as was also mentioned, is that the platforms are not neutral pipes. They curate what you see online. If it's on Facebook, it's the news feed that's determined; if it's on YouTube, it's the recommendations; and so on. What you're seeing is curated to begin with, and the algorithm that curates it is optimized to keep you online. It turns out that, as human animals, what keeps you engaged, what gets you to share it and like it and comment on it, is outrage, conspiracy, group tribalism, and so on. So the algorithm is optimized in some
ways for exactly the kind of thing we don't want to see. Plus, there are all these tools available online that give influence campaigns that want to promote disinformation or conspiracy theories ways to launder their funding and their identity, whether it's bots or trolls or click farms and so on. It manufactures consensus, in a sense: it looks like the whole group thinks this. What bothers me so much about this is that the internet's foundational values were transparency and decentralization and user control, and what we have right now is anything but that. So I think that when we think about policy, we should think about restoring some of that transparency, restoring the user control and accountability, all within a context of privacy and the First Amendment, because otherwise we wind up with these turnkey solutions that allow foreign actors or any bad actors to spread disinformation and conspiracy theories. We'll get into a laundry list of policy ideas, but you can imagine things like labeling bots, ad transparency well beyond even the great Honest Ads Act that's been put forward, the ability to shape your own algorithm, and so on. We can go through these later, but I think what we need to explore are a lot of approaches that restore the transparency, the user control, and the privacy that we should have had in the first place. If we can get the system design right, I think we can unpack a lot of the conspiracy-mongering and the hate speech that we're seeing, and then we have to go after some of these networks and go after these bad actors directly. And I know we'll talk about that more.

Dipayan, you've spent a lot of time researching and writing about the business model of these companies and the algorithms, so maybe you could give us a sense of what you're seeing.
Sure. Well, first of all, thank you, Bobby, for convening us all. This is a really important conversation, obviously, and we have to do something to address these problems that are really bubbling up online. I think Karen put this really eloquently: there is a set of negative externalities that has emerged as a result of the internet, the commercialization of the internet, over the past 15 or 20 years, and as Cheryl said, there are a few companies that are very representative of what the consumer internet means in America today, including Facebook and Google. But the point that I would like to draw out is that these negative externalities are a result of the business model that sits at the core of the consumer internet. So what do I mean by those terms? Negative externalities: online hate, of course; the spread of hate speech online and the hateful conduct that results from that online speech is maybe exhibit A, given what we've seen in Sri Lanka and New Zealand and in other very recent incidents. The disinformation problem that Karen discussed with respect to Russia and the US, and in other instances; the systemic growth of algorithmic bias through the commercialization of algorithms online: these are all examples of negative externalities, and there are many more. What do I mean by that?
These are things that the public doesn't want, things that consumers don't want to be subject to, but which are externalities in the sense that they're generated by the consumer internet industry unintentionally. They're an organic outgrowth that's unintended, and which nobody thus far is accountable or responsible for. And when I talk about the consumer internet, what I really mean are certainly the companies that Cheryl mentioned, Twitter, Facebook, Google, but more broadly it's a business model. Whether you look at those three companies that are part and parcel of social media today, or at narrower aspects of Walmart's business online or Amazon's business online, what connects all of these practices is, first, a dialogical interface with consumers that's really focused, as Karen put it eloquently, on the engagement of the user on platforms, a few platforms that psychologists have said are borderline addictive and which are overtaking the consumer internet in America, to the extent that there are only a few players today. If, for example, we look at social media, or internet-based text messaging, or online video sharing, or search, or email, or e-commerce, each of those silos of the consumer internet industry is 95 percent controlled by one company, whether it's Amazon, Facebook, or Google. Second is the uninhibited collection of information on users through those platforms, all toward creating what are called in the industry behavioral advertising profiles. And third is the creation of these algorithms that, as Karen put it, essentially curate content and target ads. That's the business model that's consistent across this industry. Yes, for Twitter, Facebook, and Google it is the main piece of their business model, but various other companies that operate on the internet in America exhibit and practice it too, whether it's Amazon or Microsoft or Walmart.
It's that business model that really encourages this engagement toward content that anybody might want to see, content that engages the individual's social need. It's essentially this radically free market that has enabled Nike to advertise sneakers, or Russian disinformation operators to perpetuate the theory that Hillary dedicated her college thesis to Lucifer. It's a free market of ideas, and I think it's that commercial regime that's forced these negative externalities on the public. What do we do about it? Well, to keep a long story short, we're going to need to see governmental intervention, not necessarily to break that business model down, but certainly to address it at the margins, and maybe even a little more than the margins, to really contain what I would call the capitalistic overreaches of a free market that implicates the public interest. America has always put democracy and fairness over the interests of the markets, and I think that in this instance as well it's high time for governmental intervention to really take care of the people.

Mary, you've worked on these issues quite extensively, but you also were at DOJ, where you were prosecuting cases of ISIS members. Could you give us a sense of what's different between the network of ISIS members you saw online, and also on the ground, and what you're seeing now with far-right nationalists, white nationalists?

Like the others, I'd like to thank you for putting this together, Bobby. It's my pleasure to be here, and I think rather than talk about what's different, I'm going to focus on what's the same, because that's what's been the most significant to me.
After 20 years as a federal prosecutor in DC, I went over to the main Department of Justice, to the National Security Division, in May of 2014, which was one month before the Islamic State caliphate was declared. That summer, as many of you may recall, was a summer of horrible hostage-takings and beheadings of Americans and others, and that began a very intense period of US counter-terrorism efforts through 2014, 2015, 2016, which continues today but has really dissipated as the Islamic State has lost so much territory. The summer of 2015, I think, was our busiest year of terrorism prosecutions at the Department of Justice since the years immediately after 9/11, and the thing that every single case had in common was social media. Whether it was a case of people here in the United States who started being drawn into radical Islamist extremist ideology over social media, some of whom even attempted to travel to Syria to join the fight while others sent resources, money, equipment, et cetera, or whether it was actually people in country in Syria and Iraq, or in Southeast Asia or Western Europe, who were engaging in terrorist attacks on behalf of ISIS, what we saw in the law enforcement community is that all of them were in some way either radicalized over social media, using social media to recruit others, or using it to propound their own views and to encourage others to commit acts of terrorism on behalf of ISIS. This was different from what we saw with al-Qaeda, simply because of the development of the technology and the proliferation of social media and its use all over the world in the years between al-Qaeda's high-water mark and when ISIS was declared. What I've seen since I stepped away from government, and was even starting to see when I was in government, is that these same exact social media tools, whether it's to recruit, or to propagate ideology, far-right extremist ideology, or to simply promote
acts of terrorism, are the same tools we saw being used by ISIS, the same tools being used by foreign terrorist organizations. My organization at Georgetown, which I helped found in the summer of 2017 after I left the government, immediately after the Unite the Right rally in Charlottesville, Virginia, undertook bringing litigation there against a couple of dozen of the white supremacist and militia groups that had invaded the town. We brought our case using state laws against private paramilitary activity, and last summer, just a couple of weeks before the one-year anniversary of Unite the Right, we ended up successfully obtaining court orders against 23 different individuals and organizations, including Vanguard America, the Traditionalist Worker Party, the League of the South, and the National Socialist Movement, which is the American Nazi party, preventing them from returning to Charlottesville to engage in that armed, coordinated use of force, that paramilitary activity. But in our research in putting that case together, we relied on all kinds of, frankly, postings on social media. We relied on videos that were posted, we relied on audio that was posted, we relied on the chats and the conversations among the white supremacist and alt-right and neo-Nazi and neo-Confederate groups in organizing and planning to go to the rally. They used Discord, they used private Facebook channels, they used all kinds of social media and direct messaging platforms to encourage each other not just to come, but very specifically to come armed, to build shields, to organize with shield walls that could be used like ancient Greek phalanxes in order to create offensive weapons to batter the counter-protesters. And they said there what they were going to do: they were going to provoke the counter-protesters to take the first swing, and then they could react fourfold, tenfold, violently, and invoke self-defense. That was their scheme, that was their plot, and of course we saw that unfold in Charlottesville
in August of 2017, and we saw that culminate in the absolutely horrendous terrorist attack by James Fields, when he used his vehicle, taking another page right out of the international terrorist playbook, after we saw vehicles used as a means of terrorist attack in countries all around Western Europe, Southeast Asia, and other places. We saw that in Charlottesville. So I think learning more about how social media is used, learning more about the algorithms, and having people in this country understand how this draws people into the very worst, darkest places, and how it provides that platform for hate, really matters. I know some of the good people at some of the social media companies; they never intended their platforms to be used this way, but they're being used this way. And by the way, these are private companies; they don't actually have to comply with the First Amendment. The government has to comply with the First Amendment. We all share the values of the First Amendment, and I think we all want to see social media promote those values, but the companies are not technically bound by it.

Let me just follow up with a question here. The congresswoman said that this is an emergency; Karen agreed with that. But you talked about all the similarities between the ISIS members and what you're seeing online now. In fact, after the rise of ISIS in the summer of 2014, there were, I think, something like 80,000 ISIS affiliates on Twitter, and we dealt with that as if it were an emergency. We're not dealing with the rise of the far right as if it's an emergency. So I guess my question to you, as a former litigator, is: what's the way forward?
So I actually think it's time for the federal government to treat counter-terrorism as counter-terrorism, across all ideologies. As Anne-Marie pointed out in her introduction, counter-terrorism has traditionally meant, particularly post-9/11, to people in law enforcement and the intelligence community, radical Islamist extremism and countering that. But what we've seen now, particularly here in the United States, is that the lethality of far-right extremist violence is actually greater than that of Islamist extremist-inspired violence here in the homeland, of course not including 9/11. Unfortunately, our entire counter-terrorism program was not developed to address this threat, and some of that needs to be focused on. For one thing, we do not currently have a federal crime that applies to domestic ideology-inspired terrorist acts: things like murder, kidnapping, assault with a dangerous weapon, assault causing significant bodily injury. There are terrorism statutes that do sometimes apply domestically. Certainly, if you are providing material support to a foreign terrorist organization, even if you're doing it here in the United States, that's an act of terrorism; it's international terrorism even though you're doing it domestically. Also, if you use a weapon of mass destruction, or a radiological dispersal device, or a biological dispersal device, or you attack a U.S.
government official, those acts can be prosecuted as terrorism. But the vast majority of what we see in terms of attacks in the United States are committed with firearms, AK-47s, AR-15s, sidearms, or with vehicles, and there is no terrorism offense in our code that calls that terrorism and prosecutes it as terrorism. You might be able to prosecute it as a hate crime; we're seeing that with the Tree of Life synagogue, we saw that with Dylann Roof, we've seen that with James Fields in Charlottesville. You certainly can prosecute it under state law, murder and other charges, but you can't call it what it is, which is terrorism. Which means that the regime of the FBI that's been successful, and it's controversial and we can talk about that, but their entire regime for preventing terrorist attacks is built on paying attention, not through electronic surveillance but just publicly: what are people posting? Who is out there talking on open, public social media about their allegiance to ISIS? Who's talking about their allegiance to white supremacist values? And what the FBI has done really successfully over the years, or at least with some good success, is, when they see that type of posting, they use undercover online personas to go engage with those people. These are sting operations; that's what they are. We use sting operations in the US government to go after terrorists, we use them to go after child sexual exploitation, we use them to go after transnational drug rings. It's not a new technique, but it really hasn't been used to go after far-right extremists, at least in one respect because there hasn't been that mandate: there isn't a federal crime that applies to it, and Congress and the highest levels of government haven't called for it and haven't put the resources to it.

I did just want to follow up with that. I really appreciate it, and I know that that's one of the concerns that we all share
jointly, which is trying to be sure that all of these policies, whether they're an online social media policy or federal crimes, treat the crimes for and against different people the same way. I did want to flag, and I know Mary's working very hard on this, that there's a really strong theory that says we should just use the hate crimes statutes, and that those are the way to go about it, because of the concern we always have, which is: if it gets called terrorism, then who becomes a terrorist under a new regime? Does it become the new idea that there are "black terrorists," a category I think is being created falsely? So I think we're working toward a common goal, but I did want to flag that there's a really good debate about whether we need a new federal crime or whether the federal statutes we have are sufficient but need to be implemented more appropriately.

A question for both Karen and Dipayan. Karen, just circling back on the issue of emergency, for all the reasons that Mary cited, we have a very different approach here. What is going to be the tipping point before the tech companies look at their business models to try to address these problems? Because clearly they're on top of ISIS, they're on top of child safety issues, but we are far, far, far from being able to address far-right hate online.

Yeah, so I do think that if we can have more discussions like this, and more tools like what you guys are putting up, more efforts like what we're trying to do at the German Marshall Fund, it helps civil society, the public, and the media to understand, to question, and to ask in specific terms what the platforms need to do. So that's thing one. And thing two is to ask policymakers to take action, whether it's what Mary was talking about or some of the changes that would force some transparency, some of the bills
that Mark Warner has put forward. There's the transparency in ads that people have been talking about, and transparency around bots. He has a great bill that sort of flew under the radar, a bipartisan bill that says that when the platforms are segmenting folks to figure out what content really moves them, or what will get people to be engaged, that should be considered an experiment and they have to get consent. So there are a bunch of policy ideas out there that I think are really, really creative. But I think we all need to get smarter, because what happens is we often get told by the platforms that this stuff is so hard, that they could never do it at scale, and that everything involves a huge diminution of free speech. I think there are some things that we can do, and this is why I emphasize that more transparency, more user control, greater respect for privacy, these kinds of things do not impinge on free expression; quite the opposite. And I think we need to think about things like that. But then there are these malicious networks that we have to take care of and really go after. I think we can do a lot by getting rid of the tools that they use, but at some point the platforms need to take them really seriously, and we've seen that they've done it when they've been pushed by law enforcement. Until Christchurch, I believe Facebook considered white supremacy a violation of their terms of service but not white nationalism, and even afterwards we haven't seen the kind of crackdown. Now, there are a lot of concerns, as you say, about false positives, but I think we all need to become a lot smarter and get very specific in what we'd like to see, and also in reporting: who's been taken down, who made the decision to take it down, what's the appeals process? That kind of transparency as well.

I think Karen is absolutely right in what she suggests, and I just want to step back for a second and say that when we think about Mark Zuckerberg or Sheryl Sandberg or the executives at any
internet company, and their calculus in thinking about these kinds of really difficult policy questions, it is unnatural for somebody like Mark Zuckerberg to actually think about what's in the public's interest and how he can redesign his platform in a way that actually reflects the public interest, because the moment he does that, he takes a step down from the pinnacle of competition, and the moment he does that, his rivals will take his position. In other words, it's unnatural for us to expect that a chief executive will take a voluntary action in the interest of the public. No, instead these executives have to do what is in their own commercial interest, on behalf of their shareholders, to remain competitive, and that's exactly the narrative, the story, of the consumer internet over the past 15 years.

So you're saying that short of regulation, we're not going to see change on these issues?

I think that's right. I think we're going to have to see governmental intervention that rides this wave of public sentiment, and without that, nothing is really going to change. And if we abstract this out another level, we've talked about two types of regulation on the panel so far: content moderation and economic regulation. I think there's utility in either approach, but content moderation can only address the here and now. It can only address, in any election season, let's make sure the Russians are not infiltrating our political system; in the immediate aftermath of a particular attack, let's make sure that live video doesn't get reshared. That's obviously an important practice, an important exercise, but in the long run it will not address the way that the business model underlying the consumer internet implicates the public and creates these negative externalities. So while we're having this discussion about how we should reframe free speech, how we should make sure that these platforms contain
hate speech and disinformation and discrimination, I think we also have to have this deeper conversation about what the capitalistic overreaches of this business model are, and how we can start to have an earnest, meaningful conversation at the federal level in the US about privacy and competition and transparency.

We're going to open up to the audience for questions. We've got someone with a microphone. If you could state your name and your affiliation, please.

Hello, hello. I'll just start talking and then the magic will happen. Mark MacCarthy with Georgetown University. This is a great panel; thank you so much for holding this. Karen, Dipayan, you're really doing great work in this area. I want to focus on Dipayan's excellent point about the externalities that exist in this area, because you do have a standard externality, just like you have in environmental law. But the difference is that you're dealing with speech, and the normal mechanism you'd use, which is direct regulation, doesn't work very well, because it involves content regulation, and most of the speech in this area is legal speech; hate speech is legal in the United States. So the measures you're talking about go at the problem in a different way: let's change the business model, you suggest. But I'm not sure that goes at the problem either, because, as you point out, the reason the companies are behaving this way is precisely because of competitive pressure. So if you now have economic reform that increases competitive pressure, you'll increase exactly the pressures that are causing them to be unable to take the steps to moderate content effectively. So we're in a bind: if you do the content regulation directly, you have free speech problems; if you do the economic regulation, you might even make the problem worse.

Let's take a few questions at once, and then we can ask the panelists to respond.

Hello, Will Farajara with Strategy for Humanity, a consultancy. Taking a step back at this, I guess I'm wondering if a
complement might be a focus on local media, and I'm thinking that this will have an educational aspect to it, the dashboard itself. But I'm struck by the fact that Timothy McVeigh blew up a federal building in 1995; this white extremism has been with us for many years, and I think part of our conceptual problem with this is understanding fully what's going on in our communities. So, in my view, a complement to this is a refocus on local media, and even regional and national media, which has shrunk, we know this, to really know what our neighbors are doing and thinking, and their grievances and so on. So I just throw that out as a provocation. Thank you, gentlemen.

Thank you very much. My name is Kasar Sharif. I'm with the American Muslim Institution, with whom you're working, Mr. McKenzie. I just have a quick point, and I'd like your comment on it. I think somebody on the panel said that private companies are not required to be subject to the First Amendment, and it seems to me that the First Amendment is used too many times as a foil. Facebook, when they knew that things were being done on Facebook to actually commit genocide and ethnic cleansing in Myanmar, kept saying that, well, it's free speech, we can't do anything about it. It just doesn't make any sense.

One more question, then we'll have the panelists respond.

Hi, I'm Bob Reed with Peace Through Action USA. At the beginning, Jonathan described this as a battle on an online battlefield, so it seems to me we need more online soldiers, soldiers for good. As a peace organization I really hate even using the word soldier, so: peace agents for good. So I'm really curious about what we could do, and, building on this gentleman's comment about local, what can we do in this realm to empower citizens to be in these spaces and shut it down, by overwhelming the hate speech with positive speech, or taking down each other's negative speech, or whatever? Because we're not talking about the dark web
here. Twitter's the light web. I have a Twitter account. This isn't like all the secret stuff, I don't wanna get into 8chan. This is all happening out in wide-open forums. So it seems to me there's some potential, and I just don't know how many human beings we need to mobilize to kind of overpower the hate speech that you're all documenting through the dashboard, to kind of get a sense of the scale of how big this peace force needs to be online. Right, thank you. Sure, well, I mean, I just wanted to flag a couple of things that were most meaningful to me. So obviously local media, the structure of the media, I think that's one of the things we're talking about here when we talk about the economic structure. It's a similar idea in the environmental space, where you start to get a monoculture and you lose diversity, and then one invasive species can come in and take over right away. That's what we're sort of seeing in the world of Facebook and Twitter, where you get one invasive species, and that monoculture, you know, it's a single kind of algorithmic space, and it can be immediately infiltrated. Whereas if you have multiple different sources of information and news, then you have many more places that you have to go before some sort of conspiracy theory becomes the coin of the realm. And I mean, there are studies now showing there are thousands of counties in this country that don't even have a local newspaper at all, and the consolidation in broadcast television is just getting worse and worse, enabled by this administration. So that's absolutely right. And I think one of the things, you know, newspapers have been trying to figure out, like, how are we going to create a new economic structure in this realm when advertising has gone the way of the monopolists online? How do we create these separate sources of media? And there's a lot of creative thought out there and no solutions, and we can definitely refer to some other great
work at New America and elsewhere on that. And then in terms of the peacemakers, I mean, I think at a minimum, on an individual level, you have to pressure the companies. I would just plug the Change the Terms petition to hold them accountable. And then you've got to speak up and defend other folks. I think the bots and the scale of the hate are quite alarming, and I'm not sure how we would do a peacekeepers brigade online, but at an individual level, absolutely do not let the person of color, the woman, the Muslim person defend themselves online on their own. Be a source of support. Report the stuff that you see, because if we don't all at least do that, then, you know, we're not even availing ourselves of the tools that we have now. So I would definitely create some agency for yourself on those points. I think another comment that is coming through as a through line in several of those comments is, sort of, this isn't new. Hate's been around, unfortunately, since mankind's been around. But really, what is causing people to go from having some hateful thoughts to radicalization toward violence? We saw it, and we see it, with people who end up getting involved with foreign terrorist organizations. There is a process, right, where they start to become interested in looking at certain content online, they get more and more radicalized, and then at some point their radicalization goes beyond just writing things, saying things, talking, and into violence. And whether it's foreign terrorist violence, Islamist extremist violence, white supremacist violence, or any other extremist violence, extremist violence is wrong, and we have to sort of figure out: are there things that can be done at the outset, before someone goes down that path? And I'll tell you, when it came to ISIS, again, back a few years ago at DOJ, and within the interagency of the government, Department of Homeland Security, Department of State, and others, we brought lots of people in a room, lots of times, to try to
figure out what we can do to try to address this problem before it gets to the point of violence. We even did something that we called Madison Valleywood, where we brought Madison Avenue advertisers, we brought Hollywood producers, we brought Silicon Valley tech sector people all together for a day, and spent the entire day at tables brainstorming ways to try to get at this. And some of the ideas were sort of, what's the counter-programming? Not counter-programming that's gonna be like, we're countering hate, but more like, let's promote the great things that Muslim Americans do, let's promote great things that are happening, to try to show that there are other paths to find happiness and belonging than going toward a foreign terrorist organization or a white supremacist organization. Because many of the people who we see who are drawn into that culture are people who are missing something in their lives. I'm not a psychologist, so I can't really get much further than that, but there are people who need to feel that there's something bigger than themselves, and whether they find refuge in Anwar al-Awlaki's sermons or in Richard Spencer's blathering, one way or the other they're feeling like they need something and they're searching for it. And we, so far, as a community, as a nation, haven't been able to address that, and I think the local roots to that is at least a place to invest some effort. The thing that I was trying to talk about when I was talking about these synthetic tools that are online, and the algorithms that reward outrage, is that it's not really a fair fight right now. So Mark, when you say there's a First Amendment problem or the economic solution's gonna make things worse, there's something in between a pure economic solution or going First Amendment, which is changing some of these design tricks that people can use, this turnkey system that they can use to manufacture consensus and sell a narrative.
There are just too many options for deceit, and the armies of hate just have an advantage they shouldn't have. It shouldn't be so easy to buy bot armies. It shouldn't be so easy to disguise who you're advertising to, and micro-target people, and expertly A/B test content that'll get people upset. The algorithms shouldn't reward it, and I know the platforms are working on this, but it can't just be that engagement is the North Star, as it's been. To your armies of hate and agents of peace, I'm gonna keep that in my head, but it's not a fair fight right now. It's really not. But the second thing I wanted to say is that I think this idea of local news is incredibly important. In the early days of broadcast, that was seen as really a big worry, and as a result the broadcasters, under the public interest standard, which is a carve-out in a way from the First Amendment, were required to have localism, and they all had news, and news followed certain journalistic standards. We have to think: what are we gonna do? We don't have the public interest standard and that carve-out from the First Amendment, but what are we gonna do to help fund local journalism, whose revenue model has been undermined by social media, and then to support it, and get readership for it, and to make people understand old-fashioned journalistic standards, which after all were a lot about transparency, right? The masthead, the separation of news from opinion. So how do we fund local news? We've gotta do that. We had PBS and CPB. What are we gonna do that's like that? And I think of that back to your agents of peace: what's our civic architecture? How do we build a civic architecture? But then first we have to make it a fair fight, and we have to think not just about what's wrong with today but about what we wanna see tomorrow. So I commend those ideas. I'll be brief.
I think in response to Mark: Representative Ted Lieu has said on this stage that Section 230 of the CDA is important, but I think it's actually time to start rethinking that idea. That's to say, we don't wanna split that apart entirely. Can you tell people what that is? It essentially gives internet companies protection from liability for speech. They're considered platforms that convey ideas and money and so on, and they don't have responsibility for what's said; if someone defames somebody on the platform, it's not the platform's fault. Exactly. So I think it's time to rethink. Platforms already take down certain kinds of content on a voluntary basis, or because it's illegal to engage in sharing that kind of content; we all know what kind of content that is. I think it's time to start bringing those kinds of paradigms to other areas as well that have clearly been shown to disrupt democracy and many other public interests. Regarding public service journalism and local news, I think it's time to take lessons from other countries that have devoted public resources to local news, to journalism, and start to rethink how we're gonna encourage people to connect to local ideas and local news. Karen puts it really well, so I won't go further than that. And finally, regarding how we can start to create these soldiers that are gonna bring about peace: I heard a really interesting idea recently. I participate in a faculty working group at Harvard led by Secretary Ash Carter, who is at the Kennedy School now. And a really interesting idea came up in those conversations: that perhaps we need to really encourage people to shift the social norm, so that when you, in your interactions online or in real life, see an example of hate speech or disinformation, you reject it, and you reject people who share it as well.
And that's not necessarily gonna travel across the network in a way that limits these kinds of negative externalities altogether, but it can help charge a conversation about how things need to change, and help the governmental intervention that I think is both necessary and inevitable. Let's take, we have two more questions and that's gonna be it, then we're gonna have to call it. Thank you. Thank you. Nathalie Maréchal from the Ranking Digital Rights project here at New America. Dipayan, I really appreciated what you said about the need to regulate the business model whose negative externalities we're dealing with right now. Do you see comprehensive federal privacy regulation or legislation being part of that solution? And if so, could you expand a bit on what you would like to see in such a bill? Let's take the final question now. Great. My question is going to be rather pessimistic. If you could state your name and affiliation. Oh, Ilhan Kagri from the Muslim Public Affairs Council. Thank you. I have a rather pessimistic question, and I'm hoping you're gonna give me an optimistic response. And that is that, you know, a lot of what you were talking about requires a sort of marriage and cooperation between the public sector and, of course, the government, particularly the federal government. And a lot of the problems that we're seeing, I think, are coming down from not just a lack of leadership but a leadership that is actually anti-press and calling the media the enemy of the people. And you also have a large number in Congress who are against, you know, working against these purveyors of hate and these white supremacists, and also these attacks on the media. And it might actually continue through 2020 and beyond. Given that, what hope is there to really have any kind of partnership between the private sector and government, or the public sector?
If we can keep our responses short, then we can wrap up here. Sure. Nathalie, regarding the privacy question, just for context: there's this California privacy law that passed last year, and it's pretty stringent. It's the most stringent standard in the United States at any level, and that's obviously at the state level, in California, and various people have described it in various ways. One very strong privacy advocate that I know describes it as going 30% as far as GDPR goes, even though it takes a GDPR-like approach. So it's already watered down from the European approach. And additionally, I think it is triggering a federal conversation, because companies like Facebook and Google want to preempt this law. So that creates this engine for advocacy here in DC for a federal privacy law, because as soon as the industry is pushing for it, of course, there's a little bit more political incentive for things to happen in the area, and Democrats and Republicans can tend to come together, because advocates care about this issue too. But at the same time, I think connecting this issue of privacy to these negative externalities is really important, and it is this uninhibited collection of data that really enables Russian disinformation operators and hate speech purveyors to target the thin cracks in American society, to identify them and target them and pound information and content at them until our social fabric breaks. So it's privacy that can protect us from that, and what I would love to see in a federal proposal would certainly be control and respect for the consumer, so that the consumer can really own and dictate where his or her information goes. And that should be the guiding principle, I think, in federal law.
I just wanted to also flag, I would agree, the importance of privacy. And the Lawyers' Committee for Civil Rights Under Law, which I think is on the next panel, collaborated with Free Press on a model bill that talks about the intersection of civil rights and privacy, and I think that would be a good place to start in terms of some of the specifics that might be included in federal privacy legislation that would get at some of this discrimination set of issues. Can I just take the pessimistic question? So I too am pessimistic a lot of the time, but I really think that there are a lot of people at the platforms who really want to figure this out and do the right thing, and I think it's up to all of us to figure out what that is. We can solve this; we can help figure out how to make it harder for recruiters and radicalizers and purveyors of hate speech to operate online, and work with the platforms to figure out how to implement policies internally that are transparent and have as few negative consequences as possible. And I think if we talk in policy terms, and that's why I still talk in policy terms, it helps folks understand that it comports with how we've regulated things in the past, and how we've managed and had public conversations about things in the past, and that there are ways to do it within the context of the law. We may not get laws for a while, but I do think that if we're clear enough about it, and find enough allies, and work with the platforms, we can make a great deal of progress. But I think we all need to get clear and better educated, which is why I think forums like this are so important, and also just talking to people who see different parts of this elephant. I'm learning a lot just being up here on this panel. Real quick, I know we need to wrap. I've been sitting here trying to think of something optimistic to say.
So unfortunately, we have a dearth of leadership. I mean, let's just be clear: our commander in chief uses Twitter to bully and harass people, and that's just the fact, and it doesn't really matter what side of the political spectrum you are on; from my perspective, that's what he does, so it's hard. Right now I think we have a problem where there are too few leaders in government standing up and making powerful statements in this area. And so the public sector and the private sector, the public sector being people doing public interest work who are not necessarily part of the government, and the private sector, are really gonna have to step in and sort of fill those shoes these couple of years, push us through this, and keep taking on the role that we would like to be able to look to leadership for, and hopefully we'll get some normalcy sometime soon. And with that, just two points worth noting here. I presume everybody in this room is not online looking for anti-Semitic content or Islamophobic content, but it's out there. I mean, the reason we're doing this research, the reason Karen and everybody on this panel does the work they do, is because there are bad actors out there who are propagating this, and we wanna try and better understand it. I just wanna touch on something also that Karen mentioned: the tech companies. I think that there are folks inside of them that really are trying to wrap their heads around these issues, and so the whole purpose of this research is to try and inform that conversation with scholarship and data. So I wanna be hopeful that we can find a better way. Once upon a time, child safety online was a super hard problem. We now have it pretty well handled; it's not perfect, but it's not what it once was, and I'm confident that we can get there with hate as well. But I just wanna thank our panelists. I wanna thank all of you.
We have lunch coming in one minute, but before we break for lunch and the second panel, which Adam Neufeld has told me is gonna be the best panel we've ever had at New America, I just wanna ask you to please join me in thanking our fantastic panel. Hi everyone, there's two lines for lunch, the same thing on each side of the staircase, and we're gonna get started back here at 12:50, so you're able to sit here and eat. Thanks for the quick turnaround and your patience in advance, thanks. Let's get started while we solve some of the technical difficulties. I'm Adam Neufeld, the Vice President of Strategy at ADL. It's a really good role here, and my guess is we will, I don't think we need to, no question, to the end. Since everyone's eating, I feel like that helps prevent some of that. So before I introduce the panel and we start off with the questions, I wanted to first set the stage a little bit. ADL recently put out a survey of over a thousand Americans about their experience online. We didn't go try to find people who had had terrible experiences online; we just, thank you so much, we just went out and did this survey, because we found that there wasn't actually a lot of information about how much harassment there was out there. There was a Pew Research Center study about two years ago that focused on a narrow area. So we asked, and what we found was really shocking, even for us, where we are getting bombarded all the time with complaints about personal experiences. We found that 53% of people had experienced some form of harassment, and that even when we took the narrower subset of severe harassment, which is there a few rows down, 37%, and that includes things like physical threats, stalking, not stuff you would just say, eh, turn off your computer, right, as Representative Clark mentioned.
It was 37%; that was almost 20 percentage points higher than two years ago, when Pew asked the exact same question. So why are people getting harassed? A lot of it, if you could focus on the left-hand chart, 32% of it, is because of some protected class. So that means two thirds of it or so is individual, and might be, you know, things that we find no good, but they're not legally protected classes. But you do see a lot, right? 20% gender, a lot of physical appearance, political views, race, religion, and so on. Then we took it from a different perspective: what if we focus on people who have a membership in at least one of the protected classes, what's their experience like online? And there is where you really see some of the impact: 63% of people who identify as LGBTQ have been harassed because of their sexual orientation online, and 35% of Muslims who are online have been harassed because of their religion. And, you know, as Representative Clark mentioned before, there's a temptation for people to just say, like, turn off your computer, this isn't that big of a deal, I'm really sorry, but just move on. And what we're saying here is that the impact is really quite huge. So this is, when we asked people who had been harassed, so about half of the Americans online: 38% had changed their online activity in some way. This could mean anything from changing their privacy settings to no longer posting to getting off of websites. 18% had contacted a platform, which isn't some easy, fun thing to do; you fill out a form, you provide a lot of information. 15% of the people who have been harassed took some steps to reduce their risk to physical safety, things like changing their commuting patterns and so on. And 6% contacted the police. 6% may seem small, right? But that is really huge, right?
What you have to do to go to the police, and to actually do that, is a real statement of the fear this is having on you. And we also asked questions about things like depressive thoughts, distraction, and found really high rates there as well. And so this is where the silver lining is, which is that people actually want platforms to do a lot more. And it didn't matter, all these percentages were 60, 70, 80%: things like adding options to filter content, making reporting easier, removing problematic users, labeling bots. And we actually saw no difference, or very little difference, whether someone had been harassed or hadn't been harassed, whether someone was liberal or conservative. The majorities want action here. So this is a little bit of scene setting, but now I wanna have the conversation. So we are thrilled to have just a great set of panelists here who have been thinking about this, living this, in various ways. And the whole purpose of this panel is: well, we talk a lot about the speakers of hate, right? The extremists, the haters, the spreaders of misinformation. We don't actually talk that much about the targets and the vulnerable populations online. And when you flip that, some of the issues are the same, but some of the issues I think are different. So we have a handful of folks here. Arusha Gordon is the co-interim director of the Stop Hate Project at the Lawyers' Committee for Civil Rights Under Law. Next to her, Nora Benavidez is the director of U.S. Free Expression at PEN America, a group that is working to help protect journalists. Next to her, we have Shahid Rahman, the executive director of the American Muslim Institution. And finally, Francella Ochoa, the vice president of policy and general counsel at the National Hispanic Media Coalition. So I'm gonna open it up here: before we get into solutions and all the rest, what is the experience of vulnerable populations online? Do these numbers feel crazy to you?
What is the personal impact? What is the personal experience? What are the trends you're seeing? Nora, do you wanna start maybe? You take it a little different, right? We didn't ask about journalists; we know that there's a lot of targeting there. What are you seeing there? Well, first of all, it's great to be here. Thank you so much, Adam, and to ADL. PEN America is a membership organization that was founded almost 100 years ago at the intersection of literature and human rights. And we occupy this interesting space, I think, where I very well would have loved to have also been able to talk about some of the issues from the previous panel, because I think there's this tension between free expression and what we know is happening online. But as a membership organization, one of the things that has happened over the last few years for PEN America is we've witnessed our own members, writers and journalists, coming to us with deep concern about the kinds of harassment that they are experiencing online. Anywhere from doxing to message bombing to non-consensual pornography, to other kinds of abuse and hateful speech directed at them for any number of things. And what's interesting is that that has gone up and up and up. And despite that inherent tension of what we know exists here, we somehow felt we could not ignore that people are experiencing online harassment in very real ways. And so, one, we also did our own survey. And a lot of it is what I think you have found as well. But part of what we wanted to examine was what the chilling effect was for writers and journalists who are members of PEN America. What are they doing, and what are they not doing, based on the online harassment and abuse that they experience? One of the most significant statistics that we have found is that 67% of our PEN America members have reported severe online harassment. And severe is repeated and continuous. So it is not just one single instance.
A lot of the statistics, I think, echo what Adam also laid out for you. But there's a bit of color that's missing when you just talk about the numbers. And I'm sure that all of the other panelists, at least Fran and I talked a little about this as well, that so often any recipient of abuse online is, one, surprised to experience it, and you wanna make it stop. But there is a disbelief that I also think happens on the part of other people. And so where allies and employers can come in is a really critical point, and one that PEN America has tried to create wraparound tools and resources for. And so we created, about a year ago, an online harassment field manual, to basically provide all of those tools and resources for anyone experiencing abuse online, so they can feel like they can respond in a meaningful way. Because absent that, I think that it is an extremely terrifying experience, one that can cripple you. What we found is that almost 20% of our members ceased all online and social media activity because of what happened to them. And these numbers are really jarring, especially when compounded by the reality that so many vulnerable and minority voices experience online harassment at higher rates than their counterparts. I think that it was something we felt we needed to respond to. If I could add on to the point about the disbelief. One thing that I think is very important, and I'll tell you a little bit about my organization: I work for the National Hispanic Media Coalition, and we were working on hate speech before there was an online. A lot of our organization's work was about holding newspapers, radio broadcasters, and news broadcasters accountable for their depictions of Latino communities, both stereotypes and misinformation, a lot of the ways that they would depict Latinos when, honestly, Latinos didn't really have a platform or any sort of voice to push back on that.
As our work migrated to looking into how that spread over onto online, I think one thing that's very important in this conversation is to always think about the human element of the story. And I know obviously you guys explored what responsibilities tech platforms have about moderating speech online. But one thing that I implore people to stop and think about is: if you're a member of a targeted community in general, and then you are targeted online, for some people there's just no escape from that. You're always a target. And I think that there's a very real, tangible spillover from when you are constantly, whether it's harassed, whether it is that you are part of this community that is made to feel not welcome here. And I'll give you a few examples. Think about the example of the Puerto Rican family that was having a picnic in a park in Chicago, where a woman was verbally assaulted by another person at the park and actually reached out to an officer for help and was met with silence, nothing. When you have a landscaper who was out in the front yard of his own property being accosted by somebody walking by on the sidewalk. When you have somebody in a coffee shop yelling at a woman to stop talking to her child in Spanish. And when we think about the way that some of the ideas that are online essentially harden into these beliefs, they're uncensored, there's nobody to temper them. They essentially carry over into an offline space where they do have real consequences, because the truth of the matter is those people are still your child's teachers. They're still your doctors and service providers. They are loan officers. It has very real consequences that spill over. So as we talk about the statistical analysis, I think it's very important not to take only that analytical view.
If we were to look at the FBI statistics, in their 2018 report analyzing their 2017 numbers, there were over 7,000 hate crimes reported in the US last year. And when you look at the people who were targeted, almost 60%, it was actually 59%, were targeted because of their race or their ethnicity. And so the thing is, this has very real consequences, because the truth is we might be one of the millions of people who never have this as an issue, but if you're part of that targeted group, it's an issue for you all the time. And given that we live in a digital society where a lot of the time there's simply no escape from that, I think it's really important for us to understand, even if you are part of the majority who doesn't deal with this as an everyday reality, that people in the majority have to be upset about it as well to get movement. It's a great point. Let's bring you in to follow up on the point about law enforcement. I know that you have worked some on thinking about how targets can work with the legal system. What's their experience overall? Is there a one-size-fits-all sort of experience in dealing with online harassment and hate? Yeah, I think it's really important. So I work at the Stop Hate Project of the Lawyers' Committee for Civil Rights Under Law. And we launched in 2017, after the most recent presidential election, to provide support to communities being targeted by hate, with the understanding that, one, hate in our country is not new, and two, there are groups on the ground who've been fighting hate for decades, and we're there to support them, and they know their communities best.
And one thing we've been doing is working with law enforcement and prosecutors to help educate them about the tools for countering hate, and also how to be responsive to hate crimes and hate incidents in their communities, both offline and online, in a way that really respects those communities and communicates appropriately what the police departments are doing and what stage of the investigation they're at. And so it's such an honor to be on this panel, and some of the points being brought up just remind me of some of the really heartbreaking calls we've received. So we have a hotline where people can call in and report hate crimes or hate incidents. And what we see from those calls is what my colleagues are saying: it starts online, and the impact is so real. So one of the very first cases we worked on was an incident out in California where boys at a high school had created an Instagram account and had gone through and created content on this Instagram account that mocked, in incredibly racist and sexist ways, all the girls of color in their high school class. And so it was pictures comparing their classmates to gorillas; they had a picture with a noose drawn around the neck of their classmate, and then just laughing and LOLs and thumbs up. And when the girls found out about this Instagram account, that their classmates, that they sat with in class every single day, when they learned about this account, the impact was tremendous. And unfortunately, I mean, this is why I'm so happy this panel is talking about it, unfortunately, the impact on the girls was kind of lost in a conversation about the First Amendment and you should be able to post whatever you want on Instagram.
And that was the conversation the media picked up, despite the fact that the girls had severe physical manifestations of the stress and had to sit in class with these boys to finish their junior and senior years. We've seen it in other school contexts too: people drop majors. We spoke to another person who ended up dropping a major because the content was so triggering after she had been targeted by a troll storm. Grades can drop. It really impacts them. And the statistic that ADL found is particularly relevant for young people, because this is a main way of communicating, and if that is cut off, it has a real impact. In addition, we've talked to people who withdraw from the public space almost completely, and I know journalists have had that experience. I'm from Vermont originally, and the only African American female legislator in the state of Vermont stepped down from her position because she had been so thoroughly harassed online. Again, training law enforcement and prosecutors in how to respond appropriately is really important in that context.

Shade, I'd love to bring you in here. I know from the Jewish perspective there's been an incredible amount of anti-Semitic harassment online. It's something we've noticed as an independent phenomenon, and also in combination with other targeting. Journalists are a good example: we found that people aren't just going after journalists, there's also a lovely subgroup that goes after Jewish journalists as part of a hobby or whatever. Are you finding the same sort of complaints, the same experience, when it comes to Muslim Americans online?
I think a great example is the incident this past weekend, where a gentleman, I think 19 years old, shot up a synagogue, and he also allegedly set fire, or planned to set fire, to a mosque. So in terms of spaces of hate, I doubt very much that one group will be satisfied with, okay, we have suppressed the Muslims, we have said what we had to say. I suspect that most of the data will show that someone who does not like Muslims and speaks ill of them online would share similar sentiments about other groups. I'm originally from New Jersey, and I have lived in the same home for about 25 years, the exact same home, and in the same neighborhood for a little longer than that. My father was a U.S. Marine, and immediately after 9/11 there was only one incident in school where someone said something, and it was immediately stopped by the administration, and I'm from a 99.9% Italian neighborhood. Now, after the 2016 presidential election, my younger sisters, who were also born and raised in New Jersey, are seeing comments online from our neighbors, people who have been our neighbors for years and never said anything to us before. So I think to some extent there is this echo chamber, and people are a lot more comfortable saying things online that maybe they wouldn't have said otherwise. And especially within the Muslim community, I think women are much, much more vulnerable, because oftentimes their hair is covered and their religion is on their sleeve, so they're easy targets, unfortunately.

I'd love to move to how different players are supporting targets, and maybe let's start with the tech companies themselves. For you all and the folks you work with in vulnerable populations generally, how are the tech companies working for them when it comes to filtering options, reporting of incidents, and responsiveness?

I will step in here.
I'll say, from my experience working with the tech companies, I actually feel like there are a lot of people making a genuine good-faith effort. They want to make this better. They genuinely are thinking about: what are we missing? How can we bring in more voices? What can we do better? I think that's legitimate. The one thing that does concern me, however, is that there are some platforms where there isn't a full-throated denouncement of hate speech. When you have people who genuinely want to get this right, but people at the very top who stop just shy of saying hate speech is not welcome, there's always going to be a gap: we could always do more, but I don't have the authorization to take that extra step. And that's where you start getting into the content moderation issues we won't discuss here. But as long as there's that little opening, that certain speech is welcome here, that makes people uncomfortable, that makes certain people targets, you're always going to have a problem. Always.

Other tech companies?

I'd like to defer to some of the experts. I was very fascinated by the fact that within the first 24 hours of the New Zealand attack, Facebook had taken down, I think, 300,000 shares of the live shooting, and since then 1.2 million. What do you do? Once something is out there, it's out there. Am I wrong?

Yeah, and that's one of the conversations a lot of the coalitions have had, and I actually do give them a lot of credit. I do think they made an effort to get that down, and once it was spliced and repackaged, to get all of the copies down as well. I agree with that. One thing that concerns me is not necessarily the larger platforms.
Maybe it's some of the smaller spaces, because I think some of the most dangerous speech happens there, and I don't want to name certain platforms. The truth is people are more willing to say ugly things in the comfort of their homes, in anonymity, and a lot of these chat rooms and other places online are completely unmoderated. We can't regulate what's going on there, and a lot of the time they're actually blueprints for the assaults and attacks that happen in real life. So we need to be thinking creatively about how to address that, because the larger platforms have the resources and, for the most part, the will to actually make things better; but for those smaller and mid-sized places that really don't have any sort of regulation, then what do you do?

Nora, you mentioned some statistics on journalists who have been harassed and how the bulk of them respond, with some going offline entirely. Is it an all-or-nothing choice for journalists? Or do you feel there are sufficient intermediate steps: I want a site, but I don't want slurs yelled at me more than three times a day; or, if you might be a bot, I'd prefer you not be able to post; or I'd like to mute more broadly. Are there sufficient choices for journalists and others to make those calls?

It's a great question. At PEN America, some of the work we've been doing focuses on preparing targets of online harassment, because preparation is everything. Once you become the target of online harassment, it's an uphill battle trying to reset your security and your cyber protections. So when you think about the tools and ways we can arm journalists in particular, in advance, to hopefully combat and prevent online harassment, there are a few things to consider.
One is the role that your allies can play. Cyber allies are friends and colleagues of yours; they don't have to be family members. One of the most interesting aspects of online harassment in comment threads, for example, and what we've seen from journalists, is that the very earliest comments in a thread on an article often set the tone for the rest of the comments. It's not that surprising from a psychological perspective that hateful language may attract more hateful language and engagement, but we've found the flip side as well: when there is constructive criticism, when there is support, when there's frankly just anything but hateful language in the earliest comments, it can help stave off harassing language later on. So what we recommend is that if you know your piece is controversial, or you're even just worried about the reaction your article may prompt from an audience, try to engage the cyber allies and community of people you have. We provide tools, and recommend other platforms, if you don't actually have a ready and available cyber-ally community. But reach out to those people and say: listen, I'm about to publish something on X topic. I'm a little worried about the way it may be received. Would you be willing to post something as a comment on my article? It doesn't have to be praise; it doesn't have to be glowing, "Nora, you're amazing"; just something to set the tone for what the comments look like. It's a really practical tool, and one of the easiest and earliest ways you can prepare for and help avoid online harassment. As for the other aspect, what happens if harassment actually strikes: I think tech companies get a really bad rap for not combing through the data of the reports they receive.
And the truth is, there is some truth to that. But our position is that you should always report incidents of online harassment and hateful language, because the more data there is, the more there is available to hopefully use down the line. If you become the victim or target of online harassment, we always recommend you think about whether you want to block and mute the abuser; but once you do that, the thread often goes away and you can't actually see what is happening. So there's this fine line of making your own personal and complex threat assessment to determine what you want to do. It really depends on your own experience, on how often you're the target of different kinds of speech online, and on what your own personal threshold is. Many journalists say, "I have a really thick skin," and that's great, but part of what we try to embed in PEN America's online harassment trainings is that there is a calculated and really personal way you need to weigh how you proceed. The number one question we get is: when, and whether, do I go to the police? In preparation for answering that, it's really important that journalists especially are able to make that assessment themselves. We try to arm them with ways to document what is going on on these platforms, even where tech companies may not be doing an adequate job.

Arusha, what about the people you've worked with, some of whom have become plaintiffs? How have the tech companies been in responding to them? And are there things you would love to see?
Yeah, our work in combating hate online falls partly within our advocacy toward tech companies. I know the previous panel talked a little about the Communications Decency Act and how difficult it is to use litigation as a tool against those companies. Not impossible, but that's another story. So we have litigation: we've tried to go after the trolls directly themselves, and part of that is clarifying some of the misunderstandings of the First Amendment. Take the case I was describing earlier, the girls who were the targets of that Instagram account, where it became a conversation about the First Amendment. One thing that frustrates me is that people will say, oh, the First Amendment, free speech, blah, blah, blah. Sometimes they'll say you can't shout fire in a crowded theater; everyone remembers that from grade school. But there are other exceptions, and one of them, relevant in this case, is that you can't interfere with someone's equal opportunity to get an education. We filed an amicus brief saying this speech was so horrific, so outside the bounds of decency, that it interfered with the girls' educational opportunities. So one thing we do is file amicus briefs, or work with fantastic pro bono counsel, to sue trolls directly when there is a legal claim. But we also hold various social media companies accountable, and we help educate them. On the previous panel, they mentioned Facebook's recent change in policy: previously Facebook distinguished between white supremacist content, which was not allowed, and white nationalist content, which was allowed. The Lawyers' Committee for Civil Rights, along with numerous other groups, engaged in a campaign to educate Facebook's staff about why white supremacy and white nationalism are actually the same thing.
And about the history and context of the white nationalist movement. So a lot of what we've done is help companies keep their terms of service valid, enforce them, and educate them on what white nationalism looks like. One of the challenges is that it frequently feels like a game of whack-a-mole. The rhetoric that white nationalists and white supremacists use is constantly changing, and that's one of the challenges for companies. It's why the rhetoric shifted from "white supremacist" to "white nationalist"; they talk a lot about "replacement" these days. The rhetoric just changes very quickly. So yeah, helping educate.

You mentioned multiple platforms before. There's a lot of talk about the big platforms, not as much about the small ones. When people talk about small platforms now, they mostly mean really horrible places, sites of radicalization. But to what extent do you think the experience of people online will actually generate market pressure for new types of platforms, ones that provide different services, different protections, different tools, different security to people who are at risk in other parts of their lives? Is that something you're already seeing, people weighing harassment when choosing between platforms? Could you see it?

Well, one thing I think is really important is that some of the larger platforms set the tone for what the mid- and smaller-sized platforms need to do. They almost set an industry standard: this is what we should be adopting on our platform, these are the rules of conduct. And one thing Arusha brought up is the point that the terms are constantly changing.
I think that makes it really easy to go underground onto a smaller or mid-sized site, because those sites might not have the same expertise in detecting coded language or alternative rhetoric that the larger platforms have gotten really good at detecting. What I would like to see is a little more effort in reaching out, whether to academics or even to actual students, who are constantly experiencing online hate. A lot of them are experts in it, because they're targets all the time. For example, when I went down to Louisiana to visit my nieces, one of the comments that came up was about being called an "Oreo" online. The truth is that for most people, if you did a search, "Oreo" would not come up as derogatory. For people who don't know, "Oreo" means that you're black on the outside, white on the inside. Is that hate speech, if it's African Americans targeting other African Americans? You decide. But the point is that either way, it's part of her everyday life. And we need to be thinking about students, people who don't really have an option to avoid being online. They are experts in this, and they are not being brought into the circle of expertise for detecting the coded language that is really the most pervasive and dangerous. We have a generation right now with much higher levels of anxiety directly related to social media consumption, and suicide rates among high school kids at an all-time high. That's correlation, not necessarily causation. But I think we need to have those conversations with students, because oftentimes, like you're describing, parents don't understand some of that coded lingo.
And at least from our side, at the American Muslim Institution, we've gone and tried to have these conversations at different mosques and institutions, to inform and educate the community about what's going on in these spaces, because, at least as far as the Muslim community is concerned, the majority of Muslims in America are very young, under the age of 30. So they are the subjects of a lot of the statistics you're seeing; if you were to dig a little deeper, you'd see them skewed toward the younger generation. So I fully support this type of notion, and this is what we're trying to do.

One thing to add really quickly about that comment about being an "Oreo." The other thing to flag is: if that came up on your feed, and you saw that somebody you knew had made that comment, and, full disclosure, I'm from Louisiana, where people are allowed to say really unpleasant things that would be totally shunned in DC, would you have called out the person you saw put it on the comment feed? Would you have said, this is unacceptable? The truth of the matter is that a lot of that is met with silence, and as long as it's met with silence, it will be acceptable. So to some extent there is a responsibility for people who are not yet targets to actually intervene and say, this isn't okay, I don't want this on my page.

I also think, and I agree, that from an AI perspective, the ways that platforms are using artificial intelligence to combat and monitor what is going on online are just not moving fast enough. The solutions we need are not being met by AI properly.
And there are a lot of what are called false positives when AI is used for content moderation, and frankly, I'm not even sure a term like "Oreo" would come up as any kind of harassment. I just don't know. Again, I'm a relative Luddite in the scheme of conversations about AI, but I don't think we're in a place where we can have complete confidence that platforms' AI tools are actually providing the protections we would want.

That's a great point; let me follow up on it. It seems that when tech companies decide whether to use artificial intelligence or a new technology, they think about it in terms of individual rights: I'm about to kick some comment off, or kick some person off my website. It feels judicial in some way, and as a result they say, if I'm wrong about even one person, that's really problematic. That comes up particularly when we're talking about white supremacists or harassment; it isn't quite as concerning when it's dealing with, say, ISIS or terrorism or child pornography. I'm sure my photos of my kids bathing will occasionally get weeded out, and that's sort of the price one pays; no one ever responds. Is that the right approach, or the only approach? Could you imagine a world where a given AI wasn't good enough to decide whether to take a comment off a platform overall, but was good enough for a journalist to filter with, based on their own choices, so that each individual could customize a little more? We have that a little bit with Google SafeSearch and things like that. Are those the types of options folks would want, or not want? What do you think?

I think that's a good first step.
I think the truth is that we need to really think about what the internet is going to look like in five, ten, fifteen years, and we are laying the building blocks of that ecosystem right now. As long as we do not have full-throated denouncements, certain things will be allowed on platforms, and we're always going to find a way to keep them in a certain part of the garden. It's kind of like: it's not welcome here in the general space, but there's a place off to the side where we'll go ahead and put you. So those filters help you take it off your feed sometimes, but they won't necessarily eliminate those comments from the platform. Yes, it's a step in the right direction, but it is hardly adequate for the internet we want for tomorrow.

I think the other issue is that it could cause people from highly targeted identities to limit how big a space they take up in the public sphere. So I would worry about that, yeah.

All right, let's do a Q&A. Why don't we take two or three questions and then open it up? And if you don't ask questions, I'll just keep asking questions. Someone is coming with the microphone.

Ilhan Kaggeri from the Muslim Public Affairs Council. You were talking about harassment, and I'm not quite sure I know what harassment is. I've responded to Twitter posts, and sometimes people will come back to me with something not just obnoxious but genuinely filthy. At that point I changed my behavior. I just said, okay, don't engage, don't even touch this thing anymore. And it's actually made me timid about responding to certain controversial things, and I'm a courageous person; I've lived through war zones. So a one-off, not just nasty but really filthy response: is that harassment, or does that fall into a different category?

So I think there are two ways to look at it, right?
One is the subjective way: did it matter to you, and did it change your behavior? That's one part of it. And then there are things we can measure in a quote-unquote objective way, so that we can compare over time and get a sense of trends. The reason our survey asked about any harassment, which included things like name-calling, maybe the stuff you talked about, and then also looked at severe harassment, is that some people gravitate toward a more inclusive definition and some toward a less inclusive one. I don't think that makes one more important than the other, as much as we think about all of them together.

It seems to me that part of harassment is not just hate, but the aim of shutting down the discussion, or shutting down people's exposure to certain ideas. So it may be a one-off individually, but what it really amounts to is a sort of policy of shutting down a whole group of people, which is what the bots do too. That's a separate way to look at it: yes, it was an individual thing, but it's really meant to shut down a group, to quiet an idea.

Let's take one or two more questions, and then I'll open it up.

Hi, I'm Michelle from Ranking Digital Rights, here at New America. A couple of you alluded to the necessity of perhaps accepting that some things we might want to post or do may not be possible if we put in rules, lines, and regulations to protect people and address this extremely serious problem. You mentioned the example of, well, maybe I can't put up pictures of my kids bathing, because it would get caught up as a gray-area topic, at least in an algorithmic assessment.
Could you give some other examples of things we're able to do online right now, affordances the platforms make available, that might not be possible if we put in strict rules to guard against all of these harms? Maybe to help us understand the trade-offs at play here.

Sure, who wants to start?

I don't know if this answers your question directly, but one thing we've had a lot of conversations about is how policies and rules can backfire, targeting minorities and creators of color who are putting out content, and shutting down their ability to have their messages heard. That's one thing we always try to keep in mind when we're advocating for a change in policy, but I'd be interested in what my colleagues have to say.

Anything else? I was thinking of the Black Lives Matter example as well. That's fine. Other comments from panelists on the trade-offs, or on this idea that harassment seems to be taking on an individualistic lens, to put words in your mouth? That it appears to be about you, the person being targeted, but really it is part of a broader message trying to sow silence or fear in a broader population?

Well, one of the things we do is actually define online harassment, because I think it's important to define it. A lot of the discussions I have with journalists in particular involve laying out that broad definition for them, and I'll read it for you: it's the repeated or severe targeting of an individual or group in an online setting through harmful behaviors. It's very broad, and it's intended to be, much like Adam said, because some of this is subjective, based on an individual's experience. But it does raise the question of what you do then. So a lot of our work now is thinking about counter-speech, and how targets of online harassment can respond, if you will, to what's happening to them.
And not all of us are famous; not all of us have a massive Twitter following. But there are some really interesting ways we're seeing people co-opt and reclaim what has happened to them. One example is someone like Roxane Gay, who seems to me to be the queen of clap-backs; her way of dealing with online harassment is to respond directly to someone. Again, she has an amazing following, so it might not be right for everyone, but it's a really interesting and clever way of shifting narratives. I'm always collecting stories from the writers, journalists, and other PEN America members I talk to about what they're doing. One individual I know said, I always respond to hateful comments with a photo of a puppy. I do that, and that's it, and they don't respond anymore.

You know, there are cat-loving white supremacists out there, too.

Oh, I love it. Great. Jonathan Weisman was another one who had been the target of terrible online anti-Semitic rhetoric, and he actually took a long break from social media. When he came back, he co-opted the parentheses, the (((echo))) symbol, in his Twitter handle and throughout his writing afterwards, again to try to shift the narrative of what had happened to him. I don't know why people do what they do. Some of this we know from the psychologists and experts we've spoken with through PEN America: the online disinhibition effect, where people don't feel accountable for what they're doing online because no one can see them or know who they are. But I don't know what the longer answer is. So our position, in developing recommendations on counter-speech, is to do what feels right to people.
And part of it, I think the strongest thing, is to build communities of support. There are amazing organizations doing that. If you don't have your own support system or cyber community, there are groups like HeartMob, which is exactly what it sounds like: a mob of people who will come give you a cyber hug and do all the things online that friends would do. In promoting counter-speech, part of what we're hoping to do is also shift how we think about the way online harassment so often ends, with someone taking a break from social media, either permanently or for a period of time.

All right, if there's time for one or two more questions, we can do that.

Hi, Cheryl Leanza with UCC. I know this was meant for the last panel, but since there weren't a lot of hands, I decided to go ahead. I guess what I really wanted to do was lift up this conversation, because there were two threads here. One is: are there things we're not going to be able to do online if we take this seriously? The other is the old First Amendment framework, which says you just fight bad speech with more speech. That's supposed to be the solution; it has always been the solution in our analog world. But to me, the difference in the online space is that bad speech actually does shut speech down, just as Nora was saying. People go offline; they stop expressing their views in the same way. I don't know if you have more examples of that kind of circumstance, but it changes the First Amendment calculus for me. I'm not clear where we're going with it, but I think it's an important point: not that this speech was ever harmless, but it's much more able to block out speech than it was in other media.

Another question. Hi, Sharon Bradford Franklin.
I'm with the Open Technology Institute here at New America. I'm curious to hear more; I hadn't heard much before about the litigation strategies Arusha was talking about, and what you said about interfering with someone's right to an equal educational opportunity, which I think ties back to the earlier story about the awful Instagram account. Could you talk a little more about who is actually getting sued and how much of this is going on?

Yeah. And while you answer that, can you also talk a little about what it takes to be a plaintiff? Because I think it's important for us not to think that litigation is an easy thing to do.

That is very true. Seriously. Just to pick up on that: one thing is that it's really hard for us to find people who want to serve as plaintiffs, even when we can find a legal claim we think will meet the basic standard for going to court. As Nora and my colleagues surely know, every time we talk to someone, we as their attorneys have to say, this is what you're opening yourself up to if you decide to go down this route. I do voting rights work as well, and even being a plaintiff in a totally different kind of case stinks. It can take years. You have to retell the same story over and over again, and sometimes that can be traumatic.

But in terms of the legal claims and the types of plaintiffs we and other organizations have seen: right now, for instance, there are three lawsuits against Andrew Anglin, the publisher and creator of The Daily Stormer, which is one of the most prolific neo-Nazi sites. One of them is ours, one is by the Southern Poverty Law Center, and one is by Muslim Advocates. Each of those cases is at a different stage, but the cases we see have various claims. In the Southern Poverty Law Center case, for instance, there's a defamation claim.
That's very common in this type of online trolling. There are also invasion-of-privacy claims, and bullying and stalking claims. So on the earlier question about what harassment is, I was thinking about it in a legal way. Stalking, for instance, each state might define differently, so we'll spend a lot of time counting: how many times did this online troll send that person a message? Does it meet a court's standard for stalking? We also bring Human Rights Act claims if a state has a good Human Rights Act. So it's really about being creative and finding legal claims that might work, and then finding plaintiffs who are willing to come forward and put themselves out there. It takes a lot of courage, and it's really an honor to work with people who have had this experience and are willing to stay in the public sphere to try to get the law to change and to get some accountability. In our current case, we've sued Andrew Anglin, and we've also sued some of the individual people who engaged in the troll storm. We were able to reach a settlement with one of those defendants, a former white supremacist and troll, so that was a really groundbreaking moment for us. But yeah, it's tough work.

I think that's a great note to end on, because it shows there is some room for optimism. You can get things done; it's going to be hard, and it's going to involve a lot of sacrifice. I'm thrilled that you all are doing this work, and thank you so much for joining the panel. If everyone can give them a warm round of applause. And thank you all for coming here and spending a couple of hours with us. Thank you.