Good afternoon. I'm Peter Bergen at New America. Thanks for joining this event about tech content moderation, disinformation, and terrorist content. We're joined by two of the nation's leading experts on counterterrorism. Daveed Gartenstein-Ross is going to talk a little bit about a new paper he's published, Redrawing the Lines, an assessment of the impact of anti-censorship legislation on terrorist content, hate speech, harassment, and misinformation. He's also the CEO of Valens Global, has frequently testified before congressional committees, has worked for DHS, and has a PhD from Catholic University. We're also joined by Karen Greenberg, who is the director of the Center on National Security at Fordham University. She's had a long career writing about counterterrorism and has written multiple books on the subject; her PhD is in history, from Yale. She's also a senior fellow at New America. So, Daveed, let's just turn to you. If you could give us a high-level account of your paper, I think we can then get into a discussion of the issues it raises.

Thanks so much, Peter, it's great to join you, and I definitely appreciate New America for hosting this discussion. At the top I'd like to acknowledge my two co-authors on this paper, Maddie Urban and Cody Wilson, both of whom are in the audience today. The paper looks at this phenomenon of anti-censorship bills, which have been introduced at the state level and implemented in a couple of states, and look like they're headed for a Supreme Court showdown in the not-too-distant future. As all of you know, over the past half decade plus, we've had two opposite trends online. On the one hand, there's been an absolute proliferation of harmful content: disinformation and misinformation, terrorist content, hate speech, harassment. Social media companies have been called upon to deal with this harmful content, but at the same time, their role in doing so has become increasingly controversial.
This debate about the role of big tech in content moderation, and in society, reached a crescendo after President Trump was banned from Twitter and Facebook in January 2021, following the January 6 attack on the Capitol. These bans were prompted by the companies' assessment that Trump's postings posed a risk of violence. But even before these bans, the anger of many political conservatives had been simmering due to perceptions that big tech's content moderation efforts were compromised by political bias. In the wake of Trump's bans, we've had anti-censorship bills that would constrain social media companies' ability to undertake content moderation, that is, pulling material offline, banning accounts, or doing what are called shadow bans, which means quietly not promoting material. Anti-censorship laws have been introduced in over a dozen states, and have been passed into law in Florida with Senate Bill 7072 and in Texas with House Bill 20. So in the study Redrawing the Lines, Maddie, Cody, and I look at this through an adversarial perspective: if these laws were in place, what could an adversary do with them? And our basic argument is that HB 20 and SB 7072 go too far in imposing rigidity on platforms' content moderation. Bad actors especially, whether they're engaged in hate speech or mis- and disinformation or terrorist content, are very keen to exploit loopholes in a system that needs to constantly change to adapt to them. Just as an example: in Florida, the limitations imposed by SB 7072 are that social media companies have to publish the standards and definitions that the platform, and I quote, "uses or has used" for determining how to censor, deplatform, and shadow ban. Companies have to apply these takedowns in a consistent manner, they have to inform users about changes to the rules ahead of time, and they can't change the rules more than once every 30 days.
Florida also provides that political candidates cannot be deplatformed, and journalistic enterprises can't be deplatformed. It's when you go past the surface and look at the net effect of this that the mind-boggling nature of the attempt to constrain the platforms becomes clear. There's a constant volume. For example, in the first three months of 2021, Facebook removed 8.8 million pieces of bullying and harassment content, 9.8 million pieces of organized hate content, and 25.2 million pieces of hate speech content. When content moderation becomes subject to litigation or to a complex appeals process, then suddenly you have a system where content moderation really can't take place, or, to put it a little more precisely, it becomes very difficult for it to take place, because of the volume of the material, the fact that each decision could be subject to litigation, and the fact that the damages platforms would have to pay out could be quite high overall, given that volume. For another data point, and then I'll wrap up: in the first three months of 2021, YouTube removed 1.16 billion comments that it found to be in violation of its policies. So, overall, anti-censorship bills are a response to some real problems that do exist on the platforms. Social media companies are not simply innocent actors who have nothing to do with the problems that have occurred, but despite all of those problems, they're called upon to deal with real harms on their platforms, and these bills create rigidity, create a right to sue that is going to slow companies down and make them very risk averse in ways that leave bad content up, and create certain loopholes, like the loopholes for political candidates and journalistic publications. So let's actually game it out and look at what an adversarial actor could do with this.
It suddenly becomes a lot of different hoops that bad actors can jump through to make sure that their materials remain online.

First of all, I want to say, Daveed, thank you for this wonderful report; it's really terrific. I'm going to give it to my class to read when we do Section 230 in some months, so I think it's going to be very helpful. I want to start today with the context, which is that we are facing a number of issues, as a country and as a globe, where we keep getting to the point where we say: it's too complicated, it's too complex, the legal universe is just going to be so prohibitive and costly going forward. In all of these areas, whether it's climate change, whether it's the globalization of economics, whether it's the pandemic, I really think that the issue we're facing with the Internet and disinformation brings together a lot of these questions about complexity. And I just want to put out there at the beginning that yes, it's complex, yes, it's complicated, but that can't be a substitute for giving up because it's too complicated. And while there is no magic wand, I also think we're going to have to make some really hard choices about the benefits and detriments we might face going forward. So, just to push back a little bit on the end of what you said: I agree that no matter which path we choose, whether it's more content moderation, restrictions on content moderation, whatever it is, we're facing a very, very long conversation that we're only at the beginning of.
One of the things I wanted to point out is that other countries, other entities, other organizations outside of the United States have given us a bit of a prelude on what to do and what not to do. I'm thinking of Germany's 2017 law criminalizing hate speech, and there has been more legislation of this ilk more recently; in 2018, France passed legislation to remove fake news during election campaigns. You see very little of this referred to in the American conversation, and I think we need to own what's happening in the rest of the world more. This summer the EU adopted the new Digital Services Act. There are countries like Turkey, which passed a bill, I think last week, penalizing disinformation and jailing journalists and social media users. And this is to your point: these laws can be used in a variety of different ways. But I still think it's instructive to look at them, and another one that came to my attention recently was Uganda, which passed a bill banning false information and hate speech. So it's not just us; it's happening all over the world, in different kinds of countries, and, as you said, in each context for very different reasons. And I think we need to think about these laws in Florida and Texas, the one in Texas being broader and the one in Florida being more narrowly focused, but both important, in terms of what the courts can do in regulating content, and that of course involves what Congress can also do in regulating content. On that note, I just want to say the phrase "anti-censorship" is really fraught, because "censorship" is the framing used by those who would like to promote the idea that there should not be content moderation.
And so I prefer to say "content regulation" rather than "anti-censorship," only because the latter cedes the censorship point, which is really not exactly what the conversation is about, and I think it pushes aside some of the more important things that are happening. Another thing I wanted to bring up at the beginning, and I don't want to go on forever, is that this moment has been coming for a long time, and it has a lot of different parameters. One is: what is the real relationship going to be between the public sector and the private sector when it comes to the internet writ large, and, for the purposes of this conversation, when it comes to disinformation? What is happening in the courts is an attempt to figure out just what that relationship can be and should be. One thing about big tech is that, outside of the disinformation conversation and more directly in the privacy conversation, it is having a parallel debate; the two don't exclude one another. This is an evolving conversation, but the question of just what right the government has to intercede when it comes to speech on these platforms is a deeper issue that we need to address, in the context of the long-standing idea that government should not interfere too much when it comes to free speech. Whether you call the platforms publishers, common carriers, whatever it is, we're not going to get away from the question of what it means to interfere with speech. I think we should also talk a little bit today about what happened with the Disinformation Governance Board at DHS. I think it's really important: it started in April, and it was declared over in August, and the idea was to study best practices for combating the harmful effects of disinformation. And a number of NGOs have taken this up, looking at what those harmful effects are and how to remedy them, how to counter them, how to prevent them.
It's sort of a loss that we haven't yet decided at the governmental level how to deal with, or even whether to deal with, the issue of disinformation. The other thing I wanted to raise in this public-private conversation is that there may be other ways to go about the same thing, and Daveed, I'd really love to hear you on this. Do we really have to address content moderation in these particular forums, in these cases? Or are there other ways, having to do with best business practices, or with something a number of critics and students of disinformation have raised, which is how we think about competition, antitrust provisions, and monopoly in terms of what this means for restraining speech in the ways that we're discussing? And there are also issues about privacy. Could we get at it in other ways that don't produce the kind of oversimplistic and, essentially to your point, long-lasting and destructive potential outcomes? I have many other things to say, but I think they'll come up in the conversation.

Well, thank you, Karen, and thank you, Daveed, for these opening comments. I just want to remind the audience: if you have a question, we're using Slido; it's located on the right side of the video, so just enter it and we'll answer your questions once we've had a little discussion between the three of us. So, there's so much to ask about this. Daveed, I thought the paper was very strong and written in a very accessible and interesting way. And you had an interesting way of describing misinformation, disinformation, lies, fake news, et cetera: you called it "polluted information," which I think is a nice way of summarizing it. It relates to this question that Karen raised about the Disinformation Governance Board at DHS, which was sort of dissolved.
So where to begin, but one thing first of all: some of this research was supported by Meta. What is their position on what you're writing about?

Obviously, Meta opposes these laws; generally speaking, any social media company is going to oppose laws that threaten the way it goes about doing business. I appreciate you raising that caveat. I will state also that though part of the research was funded by Meta, we had free rein in how we wrote about the topic. This has been a fairly long-standing position that I've had on these bills.

Well, so what is the position you're advocating, rather than the position you're opposing? I mean, you're saying that these Texas and Florida laws are too intrusive, or going to backfire, or whatever, but what's the affirmative position?

Well, what Karen mentioned before about best business practices is, I think, where the position currently lies. Companies have complex relationships with a variety of governments. Some governments, as Karen has pointed out, have laws that require much more in the way of content moderation. She mentioned Germany, she mentioned France, she mentioned Uganda. In those cases there's going to be a much more restrictive regime. Karen also pointed to a number of places where the restrictions are not necessarily good restrictions; there's a variety of countries which have blasphemy-type laws or restrictions on criticizing the government. So you have all these different laws in place, and social media companies have to comply with them from one country to another. When it comes to countries that have relatively libertarian speech regimes, and the US is one, usually you'll have relationships between tech companies and a variety of stakeholders.
So if you look at Meta, at Google, at Twitter, they have public policy teams which have relationships with government. They have relationships with various civil society stakeholders, different groups, for example, that can be victimized by hate speech, and they'll have ongoing dialogue and figure out where they think the line should be drawn. My own view is that we're in this very dynamic moment where you have, on the one hand, a variety of kinds of toxic and harmful speech, and on the other hand, harmful desires to censor people who don't think like you do. All of that makes for an explosive cocktail. I would also point out that the desire to censor is very much at odds with the technological environment. We have an environment where, at least until there are quantum computing breakthroughs, people can get a great degree of anonymity in a variety of ways, including end-to-end encryption. It's easy to put speech up; there are too many different platforms by which you can get it out onto the internet. And so with all of that, I personally favor a more libertarian approach, in which companies are able to set their own different, competing standards. I think it's fine that a company like Gab exists. I don't like Gab; it is one of the most dangerous places on the internet from the perspective of everything we're talking about. But it's fine that it exists. It's a place that people can decide to go to. If its policies cause people to die, then that's something that can be dealt with in other ways, and I do think there should be liability when people are putting up speech that is designed to cause harm. Leaving all of that aside, the affirmative case at the end of the day is this: I agree with Karen that complexity is not necessarily a reason to avoid government policy.
But complexity with competing imperatives tends to be a reason to allow some diversity of practice, so we can get at best practices. And so I'm not for anti-censorship laws, but also, generally speaking, I don't see an imperative for government to require greater content moderation either.

That's a really interesting point, because what you're really saying, and I'm asking this as a question, is that this is something happening in the public space one way or another, that the internet has empowered it, and that in a way there's not much we can do about it. So my question to you, and this isn't a rhetorical question, is this: the spread of hate, which we've seen an awful lot of over the course of history, and violence in the name of hate, let's not forget the 20th century as well as the 21st. Do you think we overemphasize the fact that the internet is a spreader of violence? We know it's a spreader of disinformation, but in the next step, violence: do you think it's a game changer or not? Does that make sense?

Yeah, I absolutely think it's a game changer. And I don't think we overemphasize it; I think if anything we underemphasize the degree to which it spreads violence. So let me divide this into two separate areas of thought. What I've devoted a lot of thought to is anti-censorship laws, and ultimately I come down against them. And Karen, I agree exactly with what you're saying, that "anti-censorship" is a very fraught term.
I just want to clarify that I use the words that were used in introducing these laws. I tend to use organic terms for different groups, so, for example, so-called patriot groups I'll call patriot groups; I let people use whatever term they want to describe themselves, and so for these laws I say anti-censorship laws. Then there's this whole question of whether there should be more obligations on social media companies. Just before this panel, before we got on, we were talking about litigation before the Supreme Court over whether Section 230 provides protection when terrorist content inspires violence; the Supreme Court will be taking these cases up. And that's something where the affirmative case is something I've done much less intellectual work on. But for the question of whether we overemphasize the role of the Internet in fostering violence: absolutely we do not overemphasize it. I think there are a lot of different things that our digital lives are doing. It's altering patterns of radicalization. My colleague Maddie Urban and I are just finishing up a paper on what we call composite violent extremism, which we think is a new trend that is really driving a lot of violence in ways that just don't fully match pre-Internet trends in violent extremism. So this is a very important discussion for that reason. And this discussion, like every discussion of content moderation, is shaped by the question of alternatives.
So my advice to big social media companies is that, generally speaking, they want to draw as big a tent as possible and allow as many accounts as possible to be on their platforms, because if you drive people off of Facebook, if you drive people off of Twitter, they go to Gab or other alternative platforms, which researchers call malevolent platforms. That creates a market space. If, say, 50% of people are driven offline, and obviously we're nowhere near that amount, but if you drive 50% of people off, or declare 50% of views off limits, then suddenly you create a huge market space for malevolent platforms. And I think even when you get into the case of government taking a role in determining that we need to drive more hate speech offline, I totally get that, but part of what we have in the US is a lot of people who don't trust government to draw the line. Without getting into a debate about that, I'll just say that if people feel that government is requiring a certain amount of content moderation and they don't agree with it, then there are lots of alternatives they can use to still get things out onto the internet, or to still read things that the government is trying to suppress. And that's part of the picture when we look at it beyond the narrower question of whether anti-censorship laws are going to cause problems or not.

That's right. Even the way you framed it toward the end, the idea of government controlling what people read, invokes the libertarian concern, and some of the precedential cases in this regard are used in that way.
And of course I agree with you that the internet has inspired more violence, and then the question is what to do. And I do think, although I know you see them as separate things, that when you look at the Texas and Florida litigation versus what's happening at the Supreme Court, we're really poised right now for some major changes in how we're going to think about who can regulate the internet, who can't, and what the best practices are. So there's a couple of things in this regard. One is: what about this call for more transparency on the part of big tech, to say what they're doing, how they do it, what their criteria are? There's been some movement on that, but overall there isn't a broad sense of transparency. Do you think that more transparency, codified or regulated transparency about how these algorithms work and what they do, available to, let's say, journalists, researchers, people like yourself, is a potential mitigating factor? I'm just curious where you stand on that.

I don't necessarily have a settled view on it. As I mentioned, this report is written from an adversarial perspective; I wanted to look at how an adversary would look at it. And if I'm looking at it from an adversarial perspective, what I would do with transparency requirements is, number one, file as much litigation as possible and try to get as many transparency reports as possible, to know exactly where the lines lie. And number two, comb through them very carefully in order to determine exactly where the line is, so that I can go right up to it and push out disinformation, misinformation, or terrorist content without actually crossing it. At the same time, there are lots of valuable reasons to have transparency. So from an adversarial perspective, there's clearly something you could do with it; that doesn't in itself make it bad.
And obviously, as a researcher, I do like transparency.

Can I ask you a question? There's been discussion of Section 230 and the case the Court is taking up related to it. Can you explain to the audience what Section 230 is, why this case is important, and what it could mean?

Is this me? Yeah. So let me, and I'm making this very simplistic, but Section 230 basically says that if you're a platform, you have immunity when it comes to content, whether putting it up or taking it down. And that's why you see both the left and the right talking about Section 230, and I'm using "left" and "right" in broader terms than they actually mean: you'll see people who lean Democratic wanting more moderation under Section 230, and people on the right wanting less moderation, also under Section 230. One of the things that's come up over Section 230, and you both will recognize this from prior cases that pit security against legal questions, is whether this is too vague a terminology. If it can be embraced by both sides for different reasons, then is it too vague? How much does the language really help us understand what 230 is about? That's why you have all these different kinds of cases. So just to bring you up to date on the case that I think is really going to be in our sights coming up, which is Gonzalez v. Google, a Section 230 case: the question is whether recommender systems, in other words, that which comes to users via algorithms determining how to push out a message and to whom to push it, are covered by the liability exemptions of Section 230 of the Telecommunications Act.
And again, this is a question about what's specific enough and what's too general for the Supreme Court to be able to tie down. The case is one that's really in all three of our wheelhouses: the Paris attacks of 2015, carried out by ISIS, killed 130-plus people, among them an American, Nohemi Gonzalez. Her family filed suit claiming that YouTube, owned by Google, and specifically YouTube's recommender system, which pushes content according to users' profiles, was in part responsible for her death. Google claims Section 230 as its defense, saying it provides liability protection for content carried by an internet service provider or platform. The lower courts in this case have ruled in favor of Google; the family says no, Section 230 doesn't cover this kind of content moderation. So really, what this case and this decision will be about, when it comes to Gonzalez v. Google, is whether Section 230 extends to the algorithms represented by these recommender systems. And actually, to Daveed's point about how much has changed: when you watch this, you see the degree to which understanding the technology is so important to being able to rule on this decision. I'm sure you remember, in the early days of terrorism prosecutions, the question of who was going to explain what was going on in terms of terrorism: who the terrorist groups were, how they were affiliated, how they were funded. We're still in the same place here with technology, even though we're so many years later. So it's actually a very important case, and it could have some very dire consequences, which I think Daveed referred to before.

Well, a question for both of you. In Daveed's paper he makes a point that goes to what Karen was saying and Daveed was agreeing with.
ISIS recruited 40,000 Muslims from around the world to come join it, and Daveed's paper says it's the biggest wave of foreign fighter mobilization, even bigger than the Afghan war against the Soviets. So I think we're all in agreement that the internet had an effect on this wave; it's just a fact. If that's a fact, and social media had this very big recruitment effect for ISIS, then how does that affect the Gonzalez v. Google debate? Because clearly social media companies did have a role in recruiting people to ISIS and inciting violence.

Well, in a way, and then Daveed, you should weigh in, but I just want to say that the other case, Twitter v. Taamneh, actually speaks directly to that as well. Specifically, the question there is: when is a company liable for aiding and abetting terrorism? ISIS posted recruiting and fundraising material on Twitter and other social media platforms, and the case here is brought on behalf of a Jordanian citizen who was killed in 2017 during an ISIS attack in Istanbul. So that is really the question in the Taamneh case, and Twitter says the lower courts improperly expanded the scope of the Anti-Terrorism Act. But to your point, Peter, this is really the question in that case, and the two cases were decided together by the Ninth Circuit and are seen as a package, so you see both sides of this: how you target the user with what you send out, and what is actually put online, in these cases recruiting and fundraising material. And I think the ramifications will probably go well beyond that, because what is also put online is disinformation.
And so, no matter what happens, the implications of these decisions, if they're misused, could be much broader than the cases themselves, and I think that's what a lot of civil liberties advocates are worried about. But Daveed, I know you have some really good thoughts about these cases. What do you think?

Yeah. So first of all, to Peter's point, the causal effect is clear: the ability of ISIS to recruit and to carry out certain attacks was linked to its use of the online space. I think that's very clear. Karen mentioned, elliptically, a little bit earlier that I saw these as separate issues, because we were talking about this just before coming on. What I was saying is that I see both of the cases, Gonzalez and the case against Twitter, as coming from the opposite direction from the anti-censorship laws; they're actually very much linked, just coming at the issue from completely different perspectives. Anti-censorship bills are designed to make it harder to pull down content: if you pull down content, you can face liability. The lawsuits are designed to create an imperative to take down content: if you leave content up, you can face liability. And so, to put it back in the context of anti-censorship laws, I think one thing we're looking at, and I wouldn't even say the potential, I think we have a reality, is that tech companies writ large, but especially social media companies, have competing imperatives across different jurisdictions. And with the Supreme Court cases, plus the anti-censorship or anti-content-moderation bills, they can have competing imperatives within the United States: liability for leaving content up.
If the Supreme Court decides that Section 230 does not protect them from liability, and also liability potentially for taking content down, if the two anti-censorship laws go into effect. So to me that's the area of intersection: you potentially have, in a real way, competing imperatives put on the companies at the same time in the same jurisdictions, which is interesting.

Yeah. Switching gears: obviously, a lot of content moderation is already being done by AI, and that will increase geometrically. What are the positives and negatives of that?

I think the positives and negatives are basically flip sides of the same thing. The positive is that it allows content moderation at scale. When you look at the 1.16 billion YouTube comments I mentioned that were pulled down in a three-month period, obviously you're not going to do that through human content moderation; no company is going to hire a workforce big enough to pull down 1.16 billion comments in three months. The negative is that when you do content moderation at scale like that, it's going to be very rough and imprecise. Comments that are innocuous will sometimes be pulled down; I think almost anybody who has ever had an argument on social media will have some experience of something being pulled down that probably shouldn't have been. And on the other hand, you'll have a lot of material stay up that may violate the terms of service. So that's the positive and the negative, and they're both flip sides of the same coin.
You know, somehow I feel like we're missing a part of the conversation. We're putting all of this burden, which is impossible, as you just described it, onto these social media companies, and putting it on our legal institutions to figure out one way or another, through punitive measures, constructive measures, preventive measures. But there's another side of this, which is educating the user. I know there's a lot of talk about digital citizenry, and that presupposes that people care whether information is disinformation or not, right? But what I'm really trying to suggest is that we're in such a tense, divided period of our history at this moment that it's exacerbating all of these issues. And I'm just wondering: if we think about the deeper causes of all this anger and hatred and just plain cruelty, do we need to be thinking about that, not putting it on the tech companies, but really trying to understand just how interrelated these two things are? Or is that wishing for something we also can't fix, whereas at least this we can try to amend one way or another? Peter, what do you think about that?

A little pitch for colleagues of ours at New America: Peter Singer and Lisa Guernsey are working with the state of Florida to help build digital literacy for kids, and clearly that's at least a partial solution.

Yeah. And for me, I don't think it's unrealistic to say that, Karen, even if it might sound unrealistic.
We're in this deeply divided period, but one point I make frequently to people I know at tech companies who work on disinformation is that the contentiousness of an issue, and the degree to which people feel they're not heard or that they're shouted down, has an impact on their susceptibility to disinformation. If you had more of an even tone, and less of a contentious tone, you probably would have less susceptibility to disinformation. We can bear this out just by looking at Russian information operations: we know for a fact, based on voluminous evidence, that Russia would have operatives on various sides of issues. A good example, in the pre-COVID era and also in the COVID era, is the anti-vax debates, where you'd have operatives associated with the IRA, the Internet Research Agency, posting anti-vax material, and other IRA operatives posting completely obnoxious pro-vaccine posts, and both of them would just ramp up the volume and the heat on that debate. So I think that literacy is important. And since we're talking about political bias in the context of content moderation, one thing that I think is very important is that any digital literacy effort cannot be a partisan effort. I've seen a number of places where questionable decisions were made in terms of what is information versus disinformation; I talk about a couple in the paper. I want to put on the table that when the lines are drawn poorly by companies, or by governments engaging in efforts to educate, that very much deepens distrust. So I want to see this be absolutely nonpartisan. And the second thing that I think is important in the 21st century is that we need to re-humanize one another.
I think one of the big problems you get on digital platforms is that people are just routinely dehumanizing toward people who have a slightly different view of the world than they do. I understand that it's very easy, when we're in a digital medium, to see other people as not real, and I think you get these escalating cycles of dehumanization. Let's take the anti-vax debate again; I like this example for a variety of reasons, including that to some extent it's not really linked to partisan issues, because there were anti-vax debates prior to COVID. You had this faction on the left, kind of the California neo-hippies, and then a faction on the right, who were both anti-vax, so it spanned the political spectrum. Say someone's vaccine-hesitant, and they express this, and people lash out at them, accusing them of endangering kids, et cetera. They feel dehumanized, and in turn, (a) they might be more susceptible to anti-vax disinformation, and (b) they might be much more likely to dehumanize someone else. So I think digital literacy is important, but I would also lay down a stake on re-humanizing one another, which I think is absolutely possible in the digital realm. It's a question of digital ethics, and that's a second thing, in addition to better digital literacy, that I would 100 percent like to see fostered.

Absolutely. For both of you: in three weeks we're going to have this midterm election, which may sabotage a number of the wishful, hopeful messages we've just been talking about, and then of course we have 2024. Is it possible to assess how things are going? I know that is a very broad question, but is the situation worse or better?
Is it the same in terms of misinformation and polarization around these events? What's your anticipation about the risk of political violence around these events generated by social media? And of course, what does 2024 look like? I know those are very hard questions to answer, but that's why you're on the panel.

Yes, I'll take the first shot. One thing: let me just put in a plug for what the Biden administration has been doing, because this is not a policy problem that they're ignoring. There have been so many things done, in true Biden fashion, quietly: so many offices set up on disinformation, like the disinformation board. They are acknowledging these issues and proactively trying to think about how to deal with them. I think that's actually a positive sign, that people are paying attention, and I think in a very responsible way. How effective that can be, I don't know, but it's certainly more effective than not paying attention, or paying attention too late. So that's the first thing. The second thing, and this is a bit of a curveball, but I'm interested in what you think about it: the cases before the Supreme Court are about international terrorism, which is very interesting, because the court is going to make a decision affecting this disinformation environment based on something that is really a nonpartisan issue, one on which there's broad coherence about countering international terrorism. And what you're really asking is, okay, is this going to mean anything for the current environment? I'm not sure. I think if it does have an impact, say on the 2024 election, it may not be a good one, depending on what the court decides, so your question is something to very much keep in mind. But I love this term digital ethics. I think it's great.
I think it's a little bit late to the game, but it's a really wonderful way to start thinking about things. So it's a wish and a prayer. In terms of the elections, Peter, I don't want to say anything negative, so I'll leave that for Daveed to weigh in.

People always leave me to say the negative things. In terms of your question, Peter, I think there are two trend lines going in different directions. One is the social media companies' trend line, and the other is the societal trend line. For the social media companies, I think 2014 to 2016 was a low point in terms of both terrorist content and mis- and disinformation: terrorist content with respect to ISIS in particular, and mis- and disinformation around the 2016 elections. Since then, I think social media companies have moved in a more positive direction. I won't use the phrase "cleaned up their act," because I don't think anyone would buy that, but in terms of terrorist content they're far better than they were at ISIS's height, and in terms of mis- and disinformation they're better than they were in 2016. But what they can do is capped by the other trend line, and our societal trend line is very negative. If you look at polarization and distrust, they're at all-time highs, with distrust in almost every institution across the board. And secondly, we have an information ecosystem which is so highly fragmented that sometimes, on certain stories, I realize that unless I read four or five different articles, I'm not going to get at what's going on, because you have multiple publications giving completely different perspectives on the same thing, depending on what the topic is.
So I think those two issues, the fragmented information ecosystem and polarization and distrust, push us in a direction where it's really hard to get an overall handle on mis- or disinformation, because they tend to spread most virally at times of high distrust.

Let me ask a question from the audience: there's been a lot of focus on terrorism, but the recent driving political forces behind anti-content-moderation arguments are not terrorism-related; they relate to COVID misinformation, or to debates around teaching what opponents have framed as critical race theory. So is this being driven by COVID? Is it being driven by worries that people are being taught the wrong things in school? What are the drivers here for this anti-censorship, quote unquote, legislation?

Yeah, the biggest drivers are political, clearly. So let me make the strongest case possible for these laws, because when you're trying to articulate the reasoning behind something, you have to make the strongest case possible. After Trump was taken offline, you had Republican Senator Lindsey Graham weigh in: "Twitter may ban me for this but I willingly accept that fate. Your decision to permanently ban President Trump is a serious mistake. The Ayatollah can tweet, but Trump can't. Says a lot about the people who run Twitter." And you can look at various other cases of what we could call, not "what about terrorism," but "whataboutism" that has a legitimate point: Trump was taken offline, while there are multiple other figures for whom their social media platforms were clearly part of how they were trying to do harm, and they're not taken offline. Then you have two other things one can point to as places where the line was drawn poorly. One is the Hunter Biden laptop story, which came out just before the 2020 election, when the New York Post reported on Hunter Biden's laptop.
Later on, after the election, you'd have both the Wall Street Journal and the New York Times come out with stories about Hunter Biden's laptop. I'm not into all of the details there, but as a non-expert observer of the Hunter Biden laptop story, it seems the line there was drawn wrong: the story was taken offline as disinformation or hacked material, and then later the New York Times and Wall Street Journal reported something very similar, with perhaps a few differences, and that was deemed okay. The third thing, which we also talk about in the study, is the way the Wuhan lab leak hypothesis for COVID-19 was suppressed by social media companies as disinformation. I called it out at the time as almost certainly wrong; it did not make sense to me that you couldn't discuss the Wuhan lab leak hypothesis. Then, flash forward another year, and the Biden administration is looking at it as one of the two most likely explanations for COVID, the other being natural origin, and social media companies reversed their position. So I think all of those give fuel to this. The other cultural issues, critical race theory being one, fall in line with people feeling that perhaps the state is imposing a new ideology, and at the same time using censorship, including through soft-power means, to try to get rid of opposing views. So all of that together comprises some of the backdrop here: people feel there is unfairness in terms of who is taken offline and who isn't. They feel that oftentimes social media companies draw the line wrong. And finally, they feel there's an informational and cultural shift being driven by government, and if big tech is a censorious arm of government, then suddenly you have both governmental power and soft-power institutions implementing that shift.
And so all of that taken together is some of the driving mindset behind it. Obviously I'm against the laws, so I could provide some counters, but I think that's the strongest case I would make for them.

I think a lot of the things you described are symptoms rather than causes, and that the last thing you came to, this real sense of dislocation and disenfranchisement, is really central to these other issues, these other angers and expressions of hate. There really is this grievance culture, which takes a variety of different forms that you've described, and it really has to do with the point to which the United States has come. A lot of it has to do with wealth distribution and people feeling a loss of the ability to move ahead with their lives, and I think that lies beneath all these other things you mentioned, down to who has access to the vaccine. But I want to come back to something else that's come up, and I just want to make this point. You said that sometimes when you see something, you have to look at three or four more sources to figure out what's right. I think a lot of us are in this moment, and I want to suggest that maybe that's not such a bad thing, this idea that you have to challenge what you read. When we talk about digital literacy, it's not just knowing which websites to go to; it's knowing how to think about the world we're living in, who's saying what, and what it actually means. I think we're at a moment where this sort of critical thinking about absolutely everything before us, which is so frustrating, is not a bad stage for us as a culture, and as a variety of communities, to have to go through. Maybe that's unrealistic, but this is new, and it's really the only way to get through the bombardment of information, and the whole spectrum of information.
Do you think that's too rose-colored?

I think it's too rose-colored. For me it's a matter of time triage. My frustration, more than anything, is that I'm a busy CEO running a company, and I don't generally have time to read through half a dozen articles to figure out what's going on on a topic. So when I realize that reading a couple of sources is probably going to confuse me more than it enlightens me, I'll usually take a low-information-diet approach. But I did want to highlight one thing that you said, Karen, because I know we're right about out of time, and I agree with you strongly about an underlying driver. There's a book by Edward Cornish called Futuring; Cornish was a famous futurist, now deceased. One of what he calls "supertrends," overarching trends that affect the globe, that he identifies is increasing deculturation, or loss of traditional culture. The way he describes it, deculturation occurs when people lose their culture, or cannot use it because of changed circumstances. We experience culture shock when we go to a country where people speak a language we don't understand, or do things differently from what we are used to; but many people today experience culture shock without ever moving to a foreign country. Instead, a new culture takes over much of their homeland, with the result that the original inhabitants become surrounded by people who do not share their culture. It can happen through immigration or demographics, but it can also happen through cultural shifts occasioned by universities or elite institutions or the like. In our work we occasionally dive into deculturation and its relationship to terrorist trends, and I definitely would underscore the idea that deculturation is one of the major factors driving polarization in the country.
And with respect to what I said earlier about re-humanizing one another: I think that the more we can try to understand underlying causes and be compassionate about very different perspectives during this wrenching time, the better we'll be able to navigate some of the poisonous effects arising from it, like people buying into mis- and disinformation, or turning to violent extremist movements, and the like.

Quick final thoughts in the next two minutes?

I do think we're in a moment of tremendous grievance on all sides. I don't think we can go backwards; I think we have to go forwards. I think all of these challenges the courts are taking up, in terms of Section 230 and the larger issues of how we treat speech in this current environment, are incredibly important, but I think the actual solution lies outside the particular topic, in the larger context. If there's any takeaway from this conversation, it's this idea of, I don't even want to say re-humanizing, but humanizing the world we live in. When we're having a discussion about big tech, that's actually something I think we all need to give a lot of thought to.

One final observation, perhaps, is that political violence is the American story; periods without political violence are the aberration, not the norm. I'm always reminded of Charles Sumner, who was nearly beaten to death on the floor of the Senate in 1856. We constantly talk about getting back to how we once were, but we've always been at odds with each other. I'm not advocating a position, I'm just stating it. And I think we tend to over-emphasize the moments of comity, which are actually kind of unusual. We can only hope that there's more comity in our future. But as a historian, by temperament at least, I'm skeptical.
Anyway, I want to thank our brilliant panelists, Daveed Gartenstein-Ross and Karen Greenberg. This was an extremely stimulating conversation. Thanks to the audience for listening and for the questions, and we look forward to seeing you at our next event.