Welcome, everyone. Welcome to the first Free Speech Project event of 2021. For those new to our events, the Free Speech Project is a collaboration of the Tech, Law & Security Program at American University Washington College of Law, which we launched just a little less than a year ago, just in time for COVID to hit, and Future Tense, which is itself a collaboration of Slate, New America, and Arizona State University. We have an amazing panel here, and there is no shortage of issues to discuss. I'm going to talk about some of the events of the past few weeks to put things in a bit of context, and then we will launch into the discussion. I encourage all of you to jump in with questions; we will definitely do our best to get to audience questions as well.

As everybody knows, Section 230 has been really everyone's favorite punching bag for the past year plus. There were multiple bills introduced in the last Congress tackling aspects of Section 230. And even at the end of 2020, as COVID was raging and Congress was trying to pass a stimulus package and go home for the holidays, President Trump vetoed what is generally considered a must-pass defense authorization bill because, among other things, it did not include a Section 230 repeal. The veto was overridden by Congress and the bill ultimately passed, but this was followed by the President wreaking havoc on the stimulus negotiations, saying he would only sign the bill presented to him if there were bigger checks to individual households. The House of Representatives said sure, and the Senate said no, we will only do that if we also include a repeal of Section 230. The bigger payments didn't actually get passed; President Trump ultimately signed the bill, and the crisis was averted. But in both situations, and in a range of other situations over the past year or so, Section 230 has played a really major role.

And then of course we had the horrific events of last week, which led to the follow-up decisions by Twitter, Facebook, and others to ban Trump. This was followed by a variety of actions that led to Parler being more or less deplatformed: a decision by Amazon Web Services and others to take action against Parler. Amazon Web Services was sued, and in that case it relied on Section 230 in its defense. So why is Section 230 so central? Why is it on everyone's mind? What should we do about it, and what do we think the Biden administration is going to do about it?

We have a really amazing panel here to discuss these issues. I'm going to introduce them very briefly, in alphabetical order by their first names, and then we will launch into the discussion. First is Matt Perault. He is the director of the Center on Science and Technology Policy and an associate professor of the practice at Duke University. He previously served as a director of public policy at Facebook, and I had the privilege of getting to know him about five years ago in the context of a series of working group meetings on law enforcement access to data across international borders. It's great to have you here, Matt. Mike Godwin is an internet scholar, an attorney, an author, and a visiting fellow at the Yale Information Society Project.
He is also a member of the board of trustees of the Internet Society, and he has held top positions at the R Street Institute, the Global Internet Policy Project, the Wikimedia Foundation, and the Electronic Frontier Foundation. He's a prolific author, and I highly recommend his most recent book, The Splinters of Our Discontent: How to Fix Social Media and Democracy Without Breaking Them, which came out in 2019. It's great to have you here as well, Mike. Thank you. And last but certainly not least is Victoria McCullough, who is Automattic's Director of Social Impact and Public Policy. She previously worked for the Chicago-based Organizing for Action and served in the White House Office of Public Engagement and the Department of Homeland Security's Office of Congressional Affairs under the Obama administration. It's great to have you here as well, Victoria. Thank you.

I'm going to turn to you, Mike. We're talking about all these reforms and a potential repeal of Section 230, but before we talk about changing it, let's talk about what it actually is. What is Section 230, and how does it work?

I designed my remarks to give you basically half a century of First Amendment jurisprudence in about three minutes, so I'm going to try to rush through it really quickly. But I'll say at the outset something that's coloring today's discussion of Section 230, which is that there's a strong tendency in policy and lawmaker circles to see the world of the internet as basically following one of two models: either the traditional publisher model, in which the company is controlling all the content from top to bottom and has legal responsibility for it, or the common carrier model, which is more like what the phone companies or telegraph companies do, or even the postal service or Federal Express, where they don't make content decisions but they're also not responsible for the content. So the tendency, whenever you talk to people who are new to this issue, is to ask: where does Facebook fall, where does Reddit fall, where does Wikipedia fall? And the answer is, mostly it doesn't fall squarely into either of those two categories.

Fortunately, in our First Amendment jurisprudence there's actually a third model, officially recognized by the Supreme Court in a 1959 case called Smith v. California. And that is the bookstore, library, and newsstand model. It turns out that we don't want bookstores, libraries, and newsstands to have to carry everything, but we also don't want a library that removes some books, or a newsstand that decides not to carry a magazine, to be legally responsible for everything that it does carry. The bookstore model was understood by the Supreme Court in 1959 (I was three years old that year, so that's how long ago that was) as something really central to the public discourse, reading, and public education roles of the First Amendment. It's a third model. So the question that arose when the internet and digital communications became much more central to American life was: what are we going to do about CompuServe and America Online? They're not quite traditional publishers, but they're not quite common carriers either; they do make content decisions. And in a case called Cubby v. CompuServe, in 1991, CompuServe said: we're more like the bookstore of Smith v. California.
We're more like the libraries and newsstands that are protected under this distributor model. So far so good. I had just become a lawyer in 1990, so that was really great for me. I thought: great, this issue is settled, we'll never have to talk about it again. Oops. Just a few years later, Stratton Oakmont, which you know from The Wolf of Wall Street movie, sued Prodigy, another online service. And for some reason the judge in that case decided, despite all the jurisprudence saying there aren't just two models, that there really are only two models: the common carrier model of phone companies or the traditional publisher model of the New York Times. That decision was hugely influential, even though it wasn't binding, because there weren't many cases about online media.

What happened is that Congress in that very period was considering a lot of telecom regulation in a 1996 telecom reform act, and what got added to that reform bill in 1996 was what we now call Section 230. It was originally part of the Communications Decency Act, and the idea was that Congress wanted to enable online services to make content choices, for G-rated audiences, say, or to remove controversial or troubling content if they wanted to, without acquiring liability for anything users generated. So that's how we got Section 230. The important thing to remember is that Section 230 didn't come out of nowhere. Section 230 came out of Congress's recognition that Cubby v. CompuServe was right, that Smith v. California was right, and that there are other models in which we allow service providers to curate content without acquiring liability.

So that's where we got Section 230. And where we are right now, in just a couple of sentences, is that people on both sides of the aisle still think there are only two models for regulating internet companies. Our biggest job when we talk about Section 230 is to try to get them out of that bipolar disorder, as I call it, where they think it has to be one or the other model. There are other models, and Section 230 has done a good job of building important places for Americans to meet each other, talk about anything they want to, and produce their own content, and not always, by the way, to plot invasions of the Capitol, but also to share really positive content and create communities. I'll stop there.

Thanks, Mike, that's a super helpful, fabulous overview. Victoria, when we think about the current debates about Section 230, as Mike just said, you have both the right and the left calling for either reform or repeal, for a variety of reasons. On the right there's been concern about conservative bias, or an alleged concern about conservative bias, for a long period of time; from the left you often hear concerns about misinformation, disinformation, hate speech, and a range of other issues. But most of the discussion on both sides focuses on the big tech companies, on Facebook, on Twitter, on YouTube, and on what those big companies are doing or not doing in terms of content moderation. What's often left out of the conversation is the perspective of smaller and medium entities like Automattic. So we'd love to hear your perspective about what the concerns and considerations are for medium-sized companies and what something like a repeal would mean to a company like Automattic, for example. Absolutely.
So I want to first mention that for the last couple of years, not only Automattic but several similarly sized medium to small companies were witnessing the debate run away toward big tech. Often, when we were engaging Capitol Hill staff or engaging civil society, there was this assumption that essentially all platforms were made the same, that we operated the same way. Oftentimes, too, there was a sense that we all had the same size trust and safety teams moderating content. Several of us came together in the last couple of weeks to actually launch a coalition dedicated to representing the medium and small-sized companies who are often left out of that conversation. Internet Works is something we launched along with Pinterest, Reddit, GoDaddy, Cloudflare, and numerous others; I think it's up to 16 companies now. Part of that was to move past the point where, in the beginning, there was sort of a resistance of "don't touch 230 at all." For us as a coalition, and for Automattic, there is a very real interest in engaging and working hand in hand with Capitol Hill staff to really dive into our processes around content moderation and to engage with them on ways to create thoughtful legislation. So I do think there is a movement happening that is trying to draw a little bit of that distinction and really show the diversity of the tech sector, and the problems that come with it.

With that, here are our two biggest concerns. The first is liability. For a company like Automattic, and I should mention that WordPress.com and Tumblr are probably the two properties most relevant for this conversation, the biggest concern is that we open ourselves up to liability. We have much smaller legal teams than the likes of Facebook and Amazon, and that liability could eventually be devastating. The second, and I think you hear this on the other side, is a real concern about freedom of expression. As companies of a certain size grow concerned that they're going to open themselves up to liability, it can often mean restrictions on speech. So it's that combination: opening ourselves up to liability, and then really seeing restrictions on freedom of speech, because our teams, whether legal teams or trust and safety teams, are just not the same size.

I'll give you an example, and I only give it because it's the only big piece of legislation that we have seen come down across all the platforms, but I think it gives a sense of the size differences here: GDPR. Many of us are supportive of privacy as a whole, so this is not a complaint about GDPR or privacy, but what we saw for Tumblr was almost an entire year of our entire engineering team working to ensure that we were up to speed on it. For a Facebook, or even a Twitter to a certain extent, they're essentially able to have an engineering team dedicated to ensuring that they're going to be compliant.
But at the same time, they're also able to continue to create new features and stay ahead of the competition. Ultimately, that sort of prevents the rest of us from competing in some ways. Not to mention companies that have not even been thought of yet, or are just getting started; that is just an incredibly fraught, challenging environment to walk into. So the biggest thing we're concerned about is this liability and freedom of expression conversation, but also the fact that we are big fans of innovation and diversity in tech, and much of our business thrives on that type of diversity. If legislation is not thoughtful, those of us representing the medium and small-sized companies will be the ones likely to get hurt, but I also think you will see a real hesitation and a slowing down of the type of innovation and diversity that we've seen in the last 20 years within the tech industry. So that's my soapbox, but I'll get off it now.

Thanks, Victoria, and just one follow-up question on the risks of liability. I think some might listen to that and say, okay, sure, but there are certain circumstances where companies are facilitating the spread of really harmful, horrid content, and shouldn't they be liable in those situations?

This is a big effort and a big focus for Internet Works. We are very supportive of working with Capitol Hill to really encourage transparency across all of our operations, whether it's our trust and safety processes, how we do business, or algorithms, and I think there's a big conversation and an industry standard that has really come about in the last several years around how we present that to the public and to our users. So that transparency conversation is a big piece of this. I think what you will find is that there is a real effort on the part of most platforms to stem the tide of that type of speech, whether we're talking about hate speech or incitement to violence, some of the conversations we've seen in the last month. There was a real effort to stop that, but a lot of it happens behind the scenes in our content moderation processes, and the big piece there is to really open ourselves up for examination and to work on educating both the public and certainly legislators on exactly how those processes work.

But there are absolutely parties that need to be examined in this space and need to be held responsible or accountable. There's a great number of us who are really working to ensure that the types of communities you're referencing don't thrive on our platforms, and there is a tremendous amount of work happening to ensure that that doesn't happen. Again, I want to speak only for Automattic, but I think to a certain extent this is happening elsewhere too, where there is a great investment in ensuring that the type of activity we've seen in the last couple of weeks does not thrive on our platforms. So hopefully that answers the question a little bit.

Matt, I want to bring you into the conversation.
I know you wrote a fabulous piece for the Day One Project. For those who aren't familiar with it, it's a project collating and putting out a range of policy proposals by a range of experts for day one of the new administration, and Matt wrote a great piece on Section 230 that I suggest everybody read. I want to talk through your suggestions, your approach, and your recommendations for the Biden-Harris administration on day one.

Yeah, and it's exciting that day one is seven days away. I think the best events to do are the ones where you learn from your fellow panelists, and between Jen's opening, Mike's overview of 230, and Victoria's comments about the implications for companies smaller than the biggest tech companies, I've learned a lot already, so this is a fun event to do.

In the paper that I wrote for the Day One Project, I proposed five possible ways to get at some of the concerns that people have about Section 230. The first is to modernize federal criminal law for the digital age. The important thing about changing or creating new federal criminal law is that Section 230 does not allow platforms to use 230 as a defense against allegations of federal crimes. So if there is existing federal criminal law and platforms are brought into court for potential violations of it, they can't use 230 as the defense. And there are many areas of federal criminal law that I think are ripe for reform and updating to deal with some of the new challenges of our digital age.

One example is voter suppression and voter fraud, and this was astonishing to me in doing the research for the paper: there are many state laws on deceptive practices in voting, but actually no federal law on deceptive practices. If we're serious about things like voter fraud and voter suppression, rather than just looking to platforms to take action against those things, we should actually pass federal criminal law creating standards of liability that platforms would have to meet; then, for allegations that platforms actually have been complicit in voter suppression, they'd be unable to use Section 230 as a defense.

The second thing that I raised in the paper, which came out maybe in late October or early November, so before the events of last week, but which is now particularly ripe, is looking at federal statutes on incitement to riot, the Anti-Riot Act, and how those statutes might be brought up to date to deal with events like the ones we saw last week. There's ongoing conversation about the role of social media platforms in offline violence. I think that's an appropriate conversation to have, but we shouldn't just expect platforms to make the rules of the road in this area; we should look to Congress to figure out how to do this effectively, and then to courts to offer guidance on when such statutes run afoul of First Amendment incitement doctrine. It does seem to me like there's room to update the Anti-Riot Act within the First Amendment, although obviously the First Amendment would require Congress to think about an anti-riot statute in a relatively narrow way.
The second reform that I suggested would be for Congress to pass the PACT Act, which was proposed by Senators Schatz and Thune and would do a variety of different things. One of them is removing platforms' ability to use Section 230 as a defense in cases where they were on notice that a particular piece of content had been held to be illegal by a court. That seems to make good sense. Most platforms do that already; they don't continue to host content that's been held by a court to be illegal. But formalizing that, I think, is sensible.

The third reform that I proposed is doing some additional work to outline the delineation between hosting content and creating content. The core of Section 230 is about that distinction: if you're merely a content host, you can use 230 as a defense; if you are actually creating the content yourself, then you cannot. And yet we don't, I think, have very clear guidance from the courts about exactly how to draw that line, and we particularly lack that guidance given the evolving nature of technology. There are lots of allegations now, for instance, that certain kinds of algorithmic preferences or algorithmic sorting should be considered content creation. I think that's probably not the right way to think about it, but it would be helpful to have more people thinking about it and more normative guidance about how to think about that line. What I proposed in the paper was a series of FTC workshops to try to develop some non-binding guidance in that area. I'm not sure that's exactly the right way to develop the norms, and I'd be curious what you, Jen, and Mike and Victoria think about how we might do it, but I do think we need more guidance about what that line looks like.

The fourth thing I proposed was designing products to facilitate individual accountability. Platforms have in the past experimented with different types of reporting flows to enable users to report problematic content, not just to the platform but to others in the community who might be able to resolve disputes. I think platforms could explore things like reporting functionality to attorneys general, for instance, so that attorneys general could review cases and consider whether to bring them. That's a way to facilitate accountability without fundamentally altering Section 230.

The last, and this is something there's ongoing conversation about in a variety of different contexts, is improving data sharing: figuring out more ways for platforms to share information with researchers and doing more in areas of transparency like Victoria described. Right now I think it's easy for lots of people to critique platforms on the grounds that they don't share enough data. A lot of those conversations amount to: I'm at a reputable university like Duke University, so of course a platform like Facebook or Twitter should share data with me. The problem is that platforms aren't protected when they engage in that kind of data sharing. The Cambridge Analytica case is one example: a platform enabled a researcher at a reputable university to get access to data, that researcher abused the data he received from Facebook, and there were significant negative consequences for users and for the platform as a result.
We need to figure out a set of best practices that would enable platforms to share data responsibly, and then ensure that once platforms do that, they're not held liable; so if the data moves from Twitter to Duke, then Duke would be liable once Duke is in possession of the data. So I think we need a comprehensive approach to data sharing that would facilitate the kind of transparency Victoria describes.

Great, thanks, Matt. One of the interesting things about your list of priority reforms is that a lot of them actually aren't about Section 230 at all; they are around Section 230. And I think that's one of the more interesting, in some ways helpful but also troubling, aspects of this debate: Section 230 has become, as I said earlier, the punching bag, but the concerns are often not things that Section 230 is responsible for or that reforms to Section 230 would resolve. So as the Biden-Harris administration goes forward and starts working on these issues, I think it's going to be incredibly important to think about Section 230 as part of a holistic context, along with the range of other reforms around the edges that also begin to address the very real problems that have been identified, the problems that have led to such a push and call for various changes and reforms to Section 230. With that, Mike, I want to give you a chance either to react to some of what Matt suggested or to talk about what you think, if anything, might be the sensible reforms we should be looking toward going forward.

So I have a lot of thoughts. As you know, Jennifer, in my book I actually talk a lot about how to create a kind of social and public policy consensus about dealing with these issues. First of all, I want to say that some of the ideas I have are similar to what Matt suggests with regard to convening forums and developing best practices. One of the models that I thought was very interesting, and maybe I'm biased because I'm a lawyer, is this: those of us who are lawyers know that in addition to the fact that we may be in commerce, generating business and generating profits for our firms, we also have ethical obligations that are above and beyond that, and they scale. They are not just big-firm obligations versus small-firm obligations; they are ethical obligations that work for solo practitioners and that work for big firms. One of the ideas that I try to develop in my book from 2019 is that this ethical framework is something bigger than law practice. Medical practice has it too; other professions have ethical practice standards as well. And the understanding here is that when you are sharing personal information, when you're sharing private information or particularly detailed information about the people who are using your services, you ought to be ethically bound not to take advantage of them and not to manipulate them, and this actually affects things like algorithmic serving of content.
So if you are contracting with advertisers to influence people through propagandistic technologies, you have an ethical obligation not to do that, just as I have an ethical obligation to be straight with my clients: I can't deceive my client in order to make more money, and I can't violate a confidence. We have these duties, and doctors have the same duties. So the big idea is that it is certainly possible for the tech industries and the tech companies, large and small, to agree on ethical standards for the treatment of individual users. And in fact, I think you can do that and also meet many of the obligations that are proposed under various implementations of the General Data Protection Regulation.

So that's my big idea. But I also want to say, in addition, that if you talk to the drafters of Section 230, if you talk to Cox and Wyden today, what they tell you is: we always intended for providers who actually participated in shaping content not to have the protections of 230. They say it again and again, and they've said it recently in congressional testimony: they believe that Section 230 should not protect a company that is engaged in creating content that is illegal or otherwise a problem. If the company shaped the content in some way, or triggered it or encouraged it, that means it can't partake of the protections of Section 230. So I've pointed you in the direction of some different approaches. Some of them align with Matt's, and I'm also trying to accommodate Victoria's quite appropriate concern, the issue she raised regarding the need to address small and medium-sized enterprises as well as the giants. I think we have to do that as well.

Mike, just to follow up on one piece of what you were talking about: the exemption that was maybe intended from the beginning but has not been interpreted that way, an exemption for those who play a role in creating the content. Obviously, how you define what it means to create content matters enormously to that kind of recommendation. So do you think algorithmic pushing of certain information, the creation of news feeds in a targeted way, is creating content? How would you think about drawing that line?

We have to think about drawing that line, because we can't just assume or default to the idea that because a mathematical process made some decisions, everybody is off the hook. I think there's some guidance to be found in the Roommates case, where users were essentially steered into expressing racial preferences in violation of civil rights legislation. I think the Roommates case came out correctly, and the company could not avail itself of Section 230 protections because it had participated in shaping the content. So I don't want to pretend I have a one-size-fits-all answer. What I really want to see is several answers that fit everybody of every size. I think we have to recognize that if we regulate everyone as if they were Facebook or Twitter, we're going to lose a lot.
We really need to recognize that there are different models out there and different enterprises, and we have to come up with a system that adapts up and down the scale; I think ethical approaches are a way to do that. And by the way, and I really have to stress this because sometimes I get misinterpreted on this, it's not only about self-regulation, or at least not only about self-regulation. I think ultimately, as is the case with ethical practice in law and medicine, it's important that regulations be informed by, and reinforce, the ethical standards that are set industry-wide and profession-wide.

Victoria, I want to give you a chance to jump in, and also to pull in a question that came in from one of the audience members: what's your suggestion for an approach that would address some of the concerns but still accommodate the small to medium tech companies? And there was a follow-up question to that as well. You talked a lot about transparency in your last answer, but a skeptical audience member asks: if there aren't clear standards and rules, how good is transparency? What does that really get us toward where we need to go?

One of the key things I left out before when talking about transparency is that most of us, Automattic included, have very clear community guidelines that really guide how we shape our communities, and those cover everything you mentioned earlier: hate speech, incitement to violence, spam, you name it. The longer the internet is live, the more those guidelines grow. I think you're seeing industry standards develop around that, but the key thing here is finding ways to ensure that platforms are actually enforcing those community guidelines, and that's been one of the things the public has rightfully been concerned about. Can you repeat the first question? I went in on the second question first.

Sure: just your thinking about how to address some of the concerns through reforms that also accommodate the interests of small and medium companies.

The reason I forgot that question is that it's such a big one, and one that I wish many of us had the answer to. One big piece of this, and Mike, I thought of this when you were speaking, is the diversity of content moderation models. I think we platforms really did not help ourselves by not going out of our way, whether to the public or to legislators, to explain how this works and how we approach ensuring that we are prioritizing user safety. Tumblr has really tried to write our terms of service and community guidelines in ways that make sense, and we ensure that we're communicating with our users on a fairly regular basis through transparency reports to say: here's what we've done in response to specific concerns or user reports of activity that violates our guidelines. That model is really different across the tech sector and across platforms. So this is not an answer to the question, but it is one of the things I am hopeful about.
And one of the reasons I want to be really careful, especially in light of the events of last week, is that there not be a knee-jerk reaction and an assumption that repealing or revising Section 230 is a magic bullet; I do think there's a lot of danger there. But my hope is that as we lean into transparency, we can explain more clearly and carefully and thoughtfully how we approach prioritizing user safety and how we approach cultivating thoughtful, supportive communities on our platforms, and that, whether it's with the help of folks like Matt or of Capitol Hill staff, the more information we're able to share, the more we can collectively come up with legislation or solutions or industry standards that get us to the right response here. So the short answer is that I wish I had answers. I know there is a ton of thought and investment on the part of our team to ensure that we are doing this the right way. In the case of Tumblr, we really rely on our community's trust, and whether or not our users stay and thrive depends on whether we are doing this really well. So I think, for a lot of us, we're really trying to do it right.

The other thing you're seeing happen, to Mike's point, is a real movement in tech around ethical product design; we're seeing this globally. Many of us internally at many of these platforms are really trying to lead the way. I would tell you that five to ten years ago, engineers were just coming up with features, and trust and safety often never had a say in them. Now there is a real process starting to happen at many of these platforms, including our own, where we are putting in a ton of thought, and there is a very thoughtful conversation happening between trust and safety and our product designers to make sure that whatever we're putting out there really minimizes the chances for abuse. So a lot of that is already happening organically. Do I think there might be a way to implement that more as a standard across apps? I would leave that to folks smarter than me to come up with, but I do think some of that is happening.

The last thing I wanted to mention really quickly goes to Matt's point earlier. What might come through in some of these more transparent conversations, and again I will speak more for Tumblr here, is that user reports are often not reflective of what the real concerns of users are. I'll give you an example. In many cases around hate speech, we often see users, and you can imagine a lot of our youngest users, find ways to abuse the trust and safety reporting system. Last year, many times one out of two reports was simply a user doing this. This is a very Tumblr thing to tell you, but there is a phenomenon on Tumblr called shipping, where basically users put different fictional characters together in a relationship. Essentially, two shipping communities would decide they did not like each other, and they would flood our user reporting queues with reports that expressed nothing but dislike, or sort of hatred, of the other community.
So in many cases there is this reliance, and I do think this is very much the case with the PACT Act, a real reliance on assuming the best in users and assuming there is not a level of trolling. You have to avoid creating a mechanism for easy trolling, a mechanism in which users are able to flood the queues and get in the way of trust and safety agents really doing their job. That's one piece that I think will come through in some of these more thoughtful conversations but is often underestimated: the abuse that happens on the trust and safety side. User reports get treated as this clear, obvious way of deciding, okay, this is a problem, when often it can be abuse and trolling. And that can be challenging when you talk about the PACT Act creating essentially a hotline that would allow a user to call in every time they see something that either they think violates the rules or maybe they just don't like.

Sorry, is it okay if I jump in? Yes, please.

I really appreciate and applaud Victoria's humility in signaling that at this point we may not have all the answers. There are a lot of people in this debate who purport to know the answers, and I would even put myself in that category to some extent, having written a paper on Section 230 reform. I don't think we know the answers. I don't think we know how changes to the statute, or even changes at the edges of the statute, as you pointed out (many of my proposals are just at the edges), would unfold. We don't know what the impact would be on large platforms or on medium-sized platforms. We don't know if it would make the internet safer or less safe. There are past examples of reforms that have actually hurt the constituencies they were designed to protect, and I think that's something we want to avoid going forward. So I actually would push back on the second question that you asked Victoria, about the idea that we need to decide on the norms and rules first and then figure out transparency second. There's a lot in this area that we don't know, and we need to take a more experimental approach to policy, which means getting good data that will inform future policymaking. From my standpoint, addressing the issues around data transparency and data sharing, so that researchers can look deeply at these issues, actually understand what the problem is, and then see how various solutions would either exacerbate or help to address it, seems to me to be a critical first step.

Thanks. I just want to pick up on the discussion, and on some of the questions from the audience, about the PACT Act, which is the Schatz and Thune bill. I think there are two pieces of it, and Matt, correct me if I'm wrong, but the piece that you were applauding in your comments was the piece of the PACT Act that says platforms have to take action if there's a court order, versus a separate piece which, as was pointed out, raises some real questions about burden and effect: a hotline and a required response for every user complaint, without that additional layer of court review. Matt, am I correct in my understanding of your proposal?
Yeah, I think that's right, and on the court order part, I don't think it requires platforms to take action; it just removes 230 as a defense if they don't take action, which is in effect a way of saying platforms need to take action.

And I think it's worth mentioning here, especially for us, and I would assume it's the case for smaller platforms generally, that we're really loath to throw out 230 in response to court orders. There was often sort of this perception, and again I will not speak for big tech, but for many of us, we work really hard to try to resolve in other ways those court orders or any concerns that we're getting from users, whether you're talking about copyright, or illegal content, or content that's violating our community guidelines. There is often a perception that 230 is just slapped onto every response, and Matt, I'm not suggesting you were saying that, but I want to mention it here because I think the public, and even some Hill staff, just assume that's often our default response. I would say in many cases that is not so; it's very rare in our case that we're throwing out 230.

Let me just jump in really quickly and point out something that I think a lot of people haven't considered, which is a nonprofit service that is valuable to literally everyone here on this panel and everyone watching: Wikipedia, where I was general counsel for a few years. Wikipedia survives on Section 230, and the reason it does is that it is a crowdsourced information resource; users generate the content. If Wikipedia were suddenly held much more legally liable for what its contributors did, you'd kill Wikipedia, and I think we mostly don't want to do that. The trouble with what I call the bipolar disorder is that it's a little bit like saying: you can live in two places, and here are the two places you can live. You can live in a sewer, where anything goes, or you can live in an army barracks, where everything is really ordered and safe and very top-down, structured and controlled. But I don't think anybody wants to live in either a sewer or, at least most of the time, an army barracks. What I want is to have services where, if they make decisions to protect users or to take care of them, and they can make different decisions for different users, they are not legally responsible for things that other users do, at least not by default. That's the model. I think Senator Schatz's proposal, and Senator Schatz has always been very thoughtful about this stuff, and Senator Thune as well, is really interesting. And I also think the politicians who want to repeal Section 230 have not really given it a lot of thought, because many of the politicians who propose repealing Section 230 would not be able to have their content hosted if there were traditional liability rules in place.

Yes, it's an important point, and this responds to one of the audience questions as well: as Mike just pointed out, the PACT Act is a bipartisan bill; it's Schatz and Thune. One of the audience questions was whether we see any hope of bipartisanship, given that, while there's lots of talk about Section 230 from both sides of the aisle,
the motivation for wanting to talk about Section 230 is often different on the two sides. There are ongoing efforts in Congress that do engage in a bipartisan way, and I just wanted to emphasize that. Let me also put forward another question from the audience: is there anything we can glean from what's going on in Europe? Europe just introduced a draft of its Digital Services Act, a big part of which, Victoria, I think matches some of what you were suggesting: it leaves the liability protection more or less in place and focuses a lot on transparency, notice to users, and accountability imposed on companies that fail to abide by their commitments to their users and fail to put in place what might be called due process protections for users. Is that a model we should be looking at as we think our way through this? Anyone want to jump in?

Just two things from my end. We're slowly engaging with other companies in the EU around the Digital Services Act and that sort of consultation conversation. A couple of things have stuck out to me. There are still some flaws in the approach that we're pretty deeply concerned about from our end, and we're trying to engage on those. But to your point, Jen, on transparency there are a lot of things for us to work with and hopefully build on as we engage. So I'm hesitant to say it's a perfect blueprint, but the EU has demonstrated, to a certain extent, that they're slightly ahead of us in the depth of their examination and their understanding of the diversity of models across the platforms; we're still a little bit farther behind here in terms of understanding that diversity. But Mike and Matt might have more thoughtful comments.

I just wanted to say that I agree, and just one follow-up question from your perspective: how important is it for the EU and the US to match up as these rules move forward?

Thank you for asking that question; it's always huge. This is another angle that you can imagine could be devastating for innovation and competition. For a company like WordPress.com or Tumblr, if you have very different structures and very different compliance needs in different parts of the globe, well, there are always going to be slight distinctions, but if the EU and the US, the two major Western models, aren't talking to each other, I think that could be deeply concerning and could ultimately discourage competition. Companies that want to go global could be taking on a ton more liability, and you might also see some companies simply decide: we're not going to go global; we're only going to operate here. I think that's a loss for the industry, discouraging competition more generally and really discouraging smaller companies from expanding into other countries. That would be my response to that.
So I see a lot of hopeful signs in the development of law and policy in the European Union. I haven't always been able to say that, because sometimes, at various levels and in different years in the past, EU regulators have almost been reactive against American tech companies, and I think that hasn't been helpful. But what you see now is an increasing awareness, maybe not increasing fast enough, that individual speech rights among European citizens could be affected if the regulations require the companies to control their content more. So that turns out to be useful. The thing that I think is less helpful is that there have been various proposals, in different sort of EU-centric conclaves, that might be based on size, so that if you reach a certain level, a certain number of subscribers, then you might be subject to more regulation. I'm not opposed to that in principle, but I just want to point out that companies can structure themselves to stay below the critical line; that's not a hard problem for companies to solve. So size as a dictating criterion is not as helpful as some people might think. What you really want to do is find approaches that scale up and down, so that companies can grow, so that companies don't have disincentives to grow if they're successful, but also so that they're not structuring things to dodge legal responsibilities. We really want to have responsibilities, generally speaking, to take care of users, I call them fiduciary responsibilities, and to be good corporate citizens; we want to have those all over the place.

We made it to 12:54 without yet talking about the Facebook and Twitter decisions, and also the issues with respect to Parler, so we can't end this conversation without spending a few minutes on them. There was one question about whether the decisions of big tech raise any First Amendment concerns, and I'm just going to deal with that really quickly: the answer is clearly no. These are private entities; they are not bound by the First Amendment, and whether or not Section 230 were in place, that would be the same answer. Matt, I want to turn to you. There's a range of questions from the audience about these matters, both about whether the tech companies were too lax previously in failing to hold President Trump to their terms of service, and about whether they did the right thing in the last week in the decisions that they made.

Yeah, I think it was probably the right decision in the short run; the political and PR equities were sort of all on one side of the equation. But I'm concerned about it in the long run. I don't think it's going to be good for the future, and I think it's going to create some momentum amongst Republicans to potentially do things that are onerous for tech companies, either in terms of antitrust regulation or in terms of speech regulation. I generally don't think that censoring the president, even a president I vehemently disagree with, as much as I disagree with President Trump, is the right move. He got 74 million votes in this election; 46 or 47 percent of those who voted, voted for him. I find his views to be abhorrent, but he's not a fringe politician in America, unfortunately.
I think it's important that we see that. As disturbing as his speech may be, the American free speech tradition is generally not to punish speech but instead to punish conduct, and I think that's the right line to draw.

Mike, Victoria, lessons from the last few weeks?

Yeah, I'll jump in. As we all learn as law students, and actually as we learn in civics class or political science classes, free speech is not limitless. There are exceptions to the general rule that you get to say what you want. And I think there's a particular issue that comes up when people holding positions of authority seem to be calling for violent action. We actually have provisions built into the American system that address that very set of issues, both in the federal criminal code and within the Constitution, in particular in the 14th Amendment. I don't want to unpack all of that here, although I've written a lot about it in the last week, unsurprisingly. But one of the things that came to mind a lot was Henry II and Thomas More, pardon me, Thomas Becket. Becket was the Archbishop of Canterbury, and Henry II, the king, said, "Will no one rid me of this turbulent priest?" And his knights went out and killed the priest. There are things you can do as a leader that are not just speech but that translate almost directly into action, and we need to be able to hold our leaders accountable; I don't think there's anything wrong with that. I agree with Matt that generally politicians should be able to speak, even when they have unpopular ideas. But when speech adds up to a call to action and translates directly into action, there's an important issue raised as to whether it's a communicative act or actually part of a disruptive or insurrectionary act. I could say a lot more, and I've written a lot about it, as I've said; Google me on it if you want to. But that's sort of where I draw the line.

Victoria, I'm going to give you the last word. I wish we could talk about this for another hour and a half, or two hours, or three days; there's so much to unpack here. But Victoria, the last word is yours.

So, in the case of Donald Trump and the comments that Twitter specifically cited as incitement to violence: ultimately, our guidelines are very clear that that type of content is not allowed on Tumblr. We've been lucky, and I will actually use that word, not to have Donald Trump on the platform. We have been incredibly vigilant over the last couple of weeks in watching for that type of conversation, as well as for communities specifically talking about organizing around state capitols. To a large extent Tumblr has not had that type of activity on our platform, but it is something we are being mindful of. The last thing I will say is around the unintended consequences of any revisions to 230. We have examples of this with the one carve-out from Section 230 that exists now, FOSTA-SESTA. For anyone who's just coming into this conversation, I think looking at some of the unintended consequences of FOSTA-SESTA is really important here.
Tumblr has always been a home to marginalized communities, and they are the very first to be deplatformed when you have really strict rules and regulations coming at platforms; they are often the first to go. Tumblr, and you can Google this, I'm not saying anything that isn't out there, had a very tumultuous period when we essentially removed all adult content, when we changed our policy around adult content. And it had the unintended consequence of hurting some communities that were very, very valuable to our platform, and ultimately that is something we're working to improve. But I think there is an overestimation of the power of artificial intelligence and technology to take a scalpel to the things we don't want; these types of regulations are unfortunately often a sledgehammer instead, and that is a real danger for speech. So I just encourage folks to look at some of the implications of FOSTA-SESTA and really understand the depth of the concern around marginalized communities and how they will almost certainly be negatively impacted by any unthoughtful legislation, should we see it.

Thank you. I want to thank all of you for joining the conversation, and I apologize if I didn't get to your question; there was so much to deal with here. I want to thank Mike, I want to thank Victoria, and I want to thank Matt for a really fabulous conversation. I encourage all of you to take a look at the Free Speech Project on the Slate Future Tense website, which collates articles on a range of free speech topics, including many of the topics we have talked about here, and also to check out our Tech, Law & Security Program at American University Washington College of Law. Many thanks, and I hope we can continue this conversation as we go forward.