Hello everyone, and welcome to At Home with the Electronic Frontier Foundation. I'm Danny O'Brien, the director of strategy here at EFF, and today we're looking at a cluster of topics that we've spent decades examining here at EFF, but which have really come into the spotlight, in the US at least, in the last few weeks: internet censorship, content moderation, and the power of platforms and infrastructure to transform and curtail free speech. With the outgoing president of the United States being removed from social media platforms, and Parler, an entire social media platform associated with his supporters, being taken offline in its entirety, many are asking: do private companies have the right to make these decisions, and if they do have that right, are they right to do so? We have three panels on that topic today, all packed with great guests, and I hope you'll stay with us for all of them. I've described the first as setting the present-day scenery of what's going on; that's the panel you'll be seeing in a few seconds. After that there's Jillian York's panel in 45 minutes, which shows us where we fit into the history of content moderation so far, and then the final panel will give you the official EFF-endorsed answers to the questions we've raised; that's moderated by my colleague Gennie Gebhart. No pressure there, Gennie. And of course, throughout, if you'd like to ask questions in the various places around the internet where we're accepting them, we'll pick all of those up and present them to the moderators in the final 15 minutes of each panel.
We're available on YouTube, Facebook and Twitch. For those of you raising the very obvious question of why, when we're talking about the huge power of these platforms, we're actually sitting right in the middle of them: well, first, I hope they don't decide to content-moderate us, and secondly, as we all realize, part of the power of these companies is their ability to attract large audiences and make it easy for people to share information; perhaps a little too easily, in this case. So if you have any suggestions for other platforms or self-hosting solutions that you'd like us to spread the news on, just email me at danny@eff.org and we can start talking about it. In the meantime you can also email us questions, and this entire conversation will be available for all to see, without any privacy invasions, at the Internet Archive at the end of this discussion.

Well, that's enough of the administration; let's get into the first panel, which we're calling Infrastructure: Scalpels and Hammers. Today I'm joined by a really amazing set of people who've been thinking about this for a very long time. I'm joined by Corynne McSherry, the legal director here at EFF, who leads our work on intellectual property, open access and free speech issues, and who was the lead author on our recent post on the topics we're discussing here, "Beyond Platforms: Private Censorship, Parler, and the Stack". Hey, Corynne. We're also joined by Mike Masnick, founder and CEO of the Copia Institute, editor of the Techdirt blog, and an EFF Pioneer Award winner, who has been writing on the topic of platform and internet censorship for many years. How are you doing, Mike? "Great, thanks for having me." Thanks for being here. We're joined by Tarleton Gillespie, senior principal researcher at Microsoft Research's Social Media Collective; that's Microsoft Research, the R&D section of Microsoft, not Microsoft the platform provider. Tarleton is also an associate professor at Cornell University, and his 2018 book Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media really spelled out the battleground that we're seeing today. Hey, Tarleton. "Hey, thanks for the invitation." Great. Finally, David Thiel, who is data architect at the Stanford Internet Observatory. The Observatory published a report last week that I think everybody should read, called "Parler's First 13 Million Users"; it's a great touchstone for concrete info on Parler in what has become a very confusing and politicized environment.

All right, I managed to bang through all of that in five minutes, which leaves us 40 minutes to solve all the problems of the world. To kick it off, Mike, maybe you can quickly summarize just what happened to Parler, because I think that's the top topic when we think about infrastructure moderation.

Yeah, so I wish there was a really short version of it, but I'll try and do my best. Obviously Parler has been around for a couple of years now, and they were hosting on Amazon Web Services, like lots of startups today. As we later learned, Amazon had been contacting them starting in November to say that some of the content on Parler was violating Amazon Web Services' terms of service regarding how they handle content moderation on certain topics. This went back and forth, and then obviously there was the mob at the Capitol on January 6th, and two days later Amazon told Parler that they had warned them about content before, and that they were now giving them until the end of the weekend before they were going to turn off the service, in part because Parler had not put forth a real plan to answer the questions Amazon had been raising. And so that happened: the weekend went through, Parler disappeared on Sunday night, and on Monday morning Parler sued Amazon, said it was an antitrust violation, and asked the court to force Amazon to turn them back on. The court refused, and since then Parler has been down. A sort of static page showed up saying they were coming back at some point, and it looks like they got some services through Epik, which is a service that's sort of famous for hosting the types of content and sites that nobody else will host, and also DDoS protection from a Russian company that is likewise famous for working with the sites that nobody else will protect. But the site itself is not yet back up.

Right. And Corynne, this all happened a few days after Donald Trump was most spectacularly taken off Twitter. But I think everybody here has been trying to differentiate that decision, by a social media platform, to remove a single individual, from this step of someone deeper in the internet structure removing an entire platform with millions of users. Are they different? Do companies have the same rights to remove whole companies as they do individuals?

So, yes and yes. Yes, they are different, and yes, companies do still have a right to decide they don't want to provide service to a website, just as much as to an individual, unless they've got some contract saying otherwise. Services up and down the stack have a lot of leeway to decide whom they want to support and whom they don't. But I think we should be pretty nervous about this particular decision. Whatever you think about Parler, when you take down an entire website, you inevitably take down all kinds of speech that's probably not objectionable at all, and certainly not what anybody was complaining about. And that points to the fundamental problem when you have moderation at the infrastructure level, and this is why we named this panel "scalpels and hammers": all you have is a hammer. When a social media company like Facebook or Twitter takes an individual user down, there are reasons to be worried about that too, even though no one's heart necessarily bleeds for Donald Trump, who has many, many other ways of talking to the world. But nonetheless, at least it's an individual user: maybe you can target individual content, particular things, and often there's some right of appeal. Maybe it's not an adequate right of appeal, but there is some option for the user to influence or complain. When you move further down the stack, there is no such option. You're going to have to over-censor; you end up taking down an entire website, because that's the nature of the service you provide.

And I want to flag a couple of things. It wasn't just Amazon: what also happened is that Google and Apple took Parler off the app stores. Amazon deserves a lot of attention, but the app stores also have tremendous power in this space, and if those two decide that you're not going to be available, you're not going to be available, that's it. They've made a decision about what speech is going to be okay online and offline. And by the way, I don't actually find it persuasive when these companies say, well, they just weren't enforcing their terms of service and that's why we took them down. I think we all know they took them down because a lot of people were putting a lot of pressure on them to take down the speech after a horrific event happened in the US Capitol. But that's a political decision, and we should think about whether we want infrastructure companies making political decisions like that.

So Tarleton, we've talked a lot already about infrastructure and the stack and the different nature of all of these companies. You've put a lot of thinking into the map that we're seeing here, and it's clear that, at least within the wider tech community, we're beginning to think about treating these different private companies in different ways. Do you have a model of where all of these organizations sit? Yeah,
it's tricky. For so long, so much of the public debate, when it finally did emerge, has focused on the social media platforms, so we talk about Facebook and Twitter and YouTube all the time. And the times that we have dealt with this question, of whether cloud computing, web hosting, some of these deeper-in-the-stack services are engaged in what looks like moderation, have always been controversy moments: it's been Parler, it's been 8chan, it's been Gab, it's been WikiLeaks. One of the things I would encourage, as we think about this, is that we want to make suggestions, or at least identify problems, that apply to the whole ecosystem of how these systems work. In some ways, dividing these into platform and infrastructure and relying on the stack metaphor is a useful shorthand, but it maybe loses some of the subtlety of what's going on. Part of what we're reacting to is that if a cloud computing provider is in a position where they're not just removing one user but a whole network, as Corynne pointed out, that's a much different intervention, and in some ways, because they are lower in the stack, we imagine that their power, as well as the expectations we have for them, are quite different than for a Facebook or a Twitter.

I think we have to remember a couple of features. This isn't a map, but it's a couple of dimensions. One is: are they intervening at the level of a user or of a network? That's the point Corynne was making. Another is: what was the promise of how moderation was supposed to happen? Is it through a contract? Is it through terms of service plus guidelines? Is there an understanding that this is a proxy for a democratic process or a user process, or is this a business process? And the other is that some of these have to do with depth, right, so web hosting is below a network; but I also want to think about the ones that go sideways. There are things like payment services and merch companies. Those aren't the ones where, if you're kicked off, you simply are gone from the internet, but the way that speakers earn money and build audiences in practice does depend on them. The idea that a t-shirt company is going to kick someone off over the merch they share through their YouTube channel is related to this. So we have both infrastructural and supplemental services.

The last thing I would say is that one of the things we're grappling with is not only a sense that moderation is going down the stack to the infrastructure, but that these companies are moving up the stack in the services they provide. Web hosting is more than just web hosting; cloud computing is more than just making a spot on the internet for something. They're providing data analytics, they're tying sites into ad networks, and that makes them different from simply saying, "I opened up a corner of the internet, you can speak from it."

Yeah, and one of the things we noticed is that Amazon really isn't just providing bare metal here; it bills itself as a full-stack service, so it's a very natural thing for a startup like Parler to use. David, the Stanford Internet Observatory did a great deep dive into what Parler actually is, for those of us who don't have an account there; actually, I think I do have one. One of the arguments here is that Parler was pretty much the locus of activity for organizing the Capitol riots, and that that's pretty much its modus operandi, that's what it's there for. Did your research indicate that, or is there more to the company?

So, one of the things about Parler, for those who haven't used it: it most closely resembles Twitter, with some Reddit-like functionality for upvoting and downvoting. It was definitely used for a lot of proclamations of what should happen, speech that would be taken down on platforms like Twitter, but it wasn't necessarily the best place to actually do detailed organizing. One of the things about Parler is you can't actually search for anything. The way things come up is either that somebody paid for it to come up (you can pay money to have your tweet equivalents appear) or that you found it on a hashtag. So instead of having groups that you could organize in, people would just spam 50 or so hashtags; Stop the Steal was a very big one around the election, and people could pick up on things from there. But a lot of it was really driven by influencers, the Epoch Times, Lin Wood, folks like that, whom a lot of people subscribed to, rather than the kind of group chat that happened on some other platforms like TheDonald or various Telegram groups.

So Mike, you're someone who has spent a lot of time thinking about, and explaining, why there might be a difference between content moderation on the platforms and this deeper filtering and selection further down in the infrastructure, but in the end you came to the conclusion that this wasn't a problem in the Parler case. Maybe you could explain why you went from describing this problem to saying it didn't actually count here.

Yeah. When I first heard the stories of everything that happened with Amazon and with Google and Apple, my gut reaction was similar to Corynne's at the top of this discussion: it was greatly concerning, particularly because of the sledgehammer nature of it, that you're not just targeting the specifically problematic speech but completely wiping out an entire site. But the more I explored this particular example, the less I could find to stand by that. To some extent it comes down to a question of competition and market power, because I do recognize that if a company doesn't want to associate with certain content, it has every right to say, I'm not going to host you, I'm not going to be a part of this. My larger concern is when governments get involved. But when I started to look more closely at each of these things, the question on the Amazon front is: is that a problem, Amazon taking it down? In their lawsuit, Parler admitted they had spoken to six other cloud hosting companies in the interim after Amazon told them they were moving. Amazon does not have a monopoly on that market, depending on your definition of monopoly; there are a bunch of other companies clearly in that space. The fact that lots of them have said they won't work with Parler either, I think, as I wrote in my article, tends to reflect back specifically on Parler. I recognize there's a little bit of uncomfortableness there, because you could argue that there are other sites we all think probably should exist that could face a similar blocklist approach where everybody says no. But I do think there are alternatives out there, and the same is true, to some extent, when you look at the Google and Apple situation. Again, my gut reaction is that I'm a little uncomfortable with that, but Google, at least with Android, allows sideloading, so there are other ways to get the app if the app exists, and these apps can exist on the web itself, so the ability to create web apps is also there. Everyone seems to skip over that part: you can access most of these services on a phone just using the browser. The browser still exists; you can still log in and access the service that way. It may not have all the features and capabilities the full native app has, but maybe that's the trade-off of not going through the Apple and Google review process.

So after thinking through all of that, I sort of said: in this case, I can understand all of the decisions, and there are alternatives; whether or not Parler has found one of those alternatives yet is perhaps a different question. It's not that this was a perfect situation and everything worked exactly as it should. I still have a sort of grumbling in my stomach about whether this is the way these decisions should be made, but I couldn't find any specific point where I could say, you know what, this company should not have done what it did, and there were no alternatives out there that gave companies other options. And so I came down on: I am not as bothered by this as some people are, even though it feels a little uncomfortable somewhere.

Corynne, is this a hard line? Is it all or nothing: you're either like an ISP, where we have ideas of net neutrality and that everyone should be able to communicate, or you're past some cutoff and companies should be able to pick and choose whom they have on their service? This is the number one question: where does that line lie, and how would we write a law that describes it?

Well, it's very funny for me to be in a position where I'm more pessimistic than Mike Masnick, or more worried, put it that way. But I remain extremely worried. I do think that competition is a problem. Amazon may not be the only service, but it's tremendously powerful and getting more powerful every minute, and so one of the things we have to think about is not just what the situation looks like today, but what it's going to look like tomorrow, and what precedents we're setting. There was a time when there were lots of ISPs and people had choices; that time is gone. And it seems to me a very good thing that we have, for the most part, adopted a norm that we're going to be very, very concerned when ISPs go around shutting people off, and that we would be very concerned if an ISP shut someone off solely because of their speech. I think most people would share that concern, except maybe copyright holders, and that is another meeting. But we have that norm for ISPs, and it's a good thing, because right now you don't have a lot of choices. We could very well be in that situation with respect to many, many web services not too far down the line.

Another thing I want to pick up on, which Mike hit on, is being more concerned about when governments get involved. Governments will get involved. If you set the precedent that you're willing to take people offline subject to pressure, governments will take advantage of that, and they already have. For example, last fall Zoom refused to host a few conferences, in part due to pressure from private parties, but also due to fear that governments might treat them as somehow associated with terrorism if they hosted those conferences. So that was another place where Zoom could have made the choice, as Amazon could have, to be neutral. Right now a lot of norms are being adopted, and those norms are going to shape the future of online speech; that's why this time is so dangerous. I think it would be much smarter, and much better overall for online speech, if all of these infrastructure companies decided: we're just not going to participate, we're going to be neutral, and we're going to stand on that.

I do remember there was a moment when both Mike and I were the reply guys to Ro Khanna, the congressperson for pretty much Silicon Valley, who was publicly saying that Amazon should throw Parler off. That's a fascinating moment, because of course he's not using any law to apply that pressure, but he is using the bully pulpit he has as a congressperson.

Tarleton, we've talked a lot about which category things fall into, and one of the things Mike and Corynne mentioned was whether there's a competing alternative. When we started looking into this, I think we took this model of the difference between infrastructure and social platforms pretty seriously, but then we came up with so many counterexamples that we were still worried about. For instance, Zoom: I don't think Zoom was infrastructure a year and a half ago, but goodness knows it's an essential part of people's lives right now, and they've started making decisions like whether Palestinian activists can effectively speak at California universities. Do we have to have a flexible line, and who should make the decision about these two categories?

Yeah, I think you raise exactly the hardest part of this, which is that the line is only where you construct it, and as the industry shifts, uses shift, and the public significance of these services shifts, that line gets crossed, and it becomes a battleground for how you urge some services to take on a certain set of responsibilities and other services to take on precisely the responsibility of being neutral, and hold both in stasis. I'm convinced by the instinct that drives this discussion, which is: the further down the stack, the more it would be nice to think of these services as conduits, to let them act as conduits, to let them not pick and choose, and in some ways give them some defense. All of this, to me, speaks to the utter vacuum of real discussion, from the political realm especially, of what this ecosystem looks like. Maybe I'm not as afraid of where government could play a role here, if it were a discussion that said: okay, none of this fits the content-versus-conduit distinction; none of this fits even the way Section 230 was imagining things 20 years ago. That doesn't mean throw it away, but at least recognize that the arrangement has gotten more complicated. If you want to protect some service as infrastructure, and let it say, "we may find this distasteful, but we provide a greater service, which is that we make space for speech, and that's going to be our thing," then you need actual, strong legal protection for that; you need the content-versus-conduit distinction that says, well, that's what we want you to be. In the absence of a real political discussion that says this is a new arrangement, that it doesn't look like publishing and doesn't look like telecommunications anymore but like something else, we have to ask: what do we want? What kinds of protections do we want to give to something that is infrastructure in one moment and a provider on top of infrastructure in another? And what kinds of assurances could we give, so that a provider that wants to say, "we don't have to like Parler, but we're going to make a home for it," understands that that position is protected?

Maybe the thing I would introduce first, in between or in addition to these categories of platform versus infrastructure, is the amplifier. I think we're recognizing that a lot of systems that began as a host or a tube or a space are then encouraged financially, partly because the market is not very diverse, to provide enhanced services. That could be data analysis; I mean, Zoom wants to build things like scheduling and hosting events.
they want to like move into that space which is part of why i imagine they think that they need to start playing this moderating role instead of saying look we're just the better skype like get out of our way and we'll just connect people and we have nothing to do with who's speaking like a classic telephone but the the move in the last few years has been to say even the platforms that were presenting themselves as we just host speech that stance hasn't held up right we have a lot of questions about the kind of power to amplify the power to coordinate the power to bring people together that question we're asking about facebook and instagram and youtube i think there are ways that as we look down the stack if that's our metaphor we have to recognize that there are roles that these services play and that complicates this question of like which line we're comfortable having them live on so david one of the i guess unchallenged assumptions that we're we're assuming here is the reason behind these these the platform right that that i think it's fair to say that the companies were either worried internally or responding to public pressure that the services that they were providing were enabling violent action and the deplatforming removing access to these these individuals from the internet would have a positive effect in reducing their ability to organize i know that the observatory has spent a lot of time studying extremism online and the the nature of how it operates is that always the case does deplatforming particularly deplatforming from a platform from a large service have that effect um it's arguable so it does mean that things fragment somewhat and the people that are doing research on extremism online lose some visibility and have to similarly fragment and start pulling in data from a bunch of different sources so when parlor what when it was announced that the platform would be going down basically all of the researchers studying this space panicked and 
started hoarding as much data that as they could from the site and even doing our analysis there were gaps and questions that we couldn't really answer and we kind of would have liked to still have the apis to pull additional information from the way that people have been moving now it's people trying to monitor hundreds of different telegram groups you know trying to scrape the entirety of the successor sites to the donald and so forth so i think that the overall deplatforming it has some upsides and downsides when it comes to both i guess continued promotion of extremism as well as research one case where i would say that the deplatforming had a little bit of an awkward effect is that of twilio who pulled the ability to do two-factor authentication basically on the site so we had parlor having issues trying to even keep up with moderation as it was and then all of a sudden the thing that kind of kept it from descending into just spam everywhere you know was this requirement that when you registered you had to have an actual phone number not avoid service anything like that when that got turned off the user creation just spiked massively and they were no longer going to be able to moderate it no matter what uh so that was i think one case where the deplatforming decision probably could have actually accelerated the problem if parlor weren't imminently pulled offline by the you know larger platforms so uh we're gonna open up this particular platform a little bit now if you have any questions to our panelists please throw them in to your intermediary of choice one that's already cropped up is um someone asked do we need formal regulation on what an infrastructural service is or does that tip pose too much risk in terms of creating false regulatory boundaries karen i know that that both in europe and the us we seem to be in this moment where further regulation of intermediaries seems to be on the cards should there be a dividing line here well the reason i worry 
about trying to um trying to put a dividing line into regulation is that i i fear that any any law that is as precise as we might want will be obsolete within two years um or sooner right it's it's it's it's difficult this why we worry at e f f o about suddenly we call tech mandates um we really worry about enshrining you know particular visions about technology and perceptions about technology into law because they often change over time so i i think to the extent that you look at any kinds of regulations or rules they need to be flexible and adaptable to what is inevitably a rapidly changing environment um so i i i think there's a lot of um efforts right now to regulate and there's no question about that i worry though sometimes that um the efforts to sort of change the rules of the internet um are their their focus like for example on section 230 section 230 has really become a scapegoat for sort of everything everyone hates about the tech giants and um and you know if we're going to regulate in this space we have to understand that every single regulatory choice that we make is going to affect speech and so that always has to be the backdrop for how we're thinking about that because it's simply you know virtually inevitable yeah can i can i jump in on that as well sure uh so i think you know and i've been a part of a variety of discussions where people are talking about you know can we you know create a map of like you know different kinds of infrastructure and and whose edge and whose infrastructure and what are the the layers and and could you you know create a taxonomy and and the thing that everyone sort of discovers very very quickly is that that's kind of impossible right you know partly because some of the stuff that the charlton was saying where you have like you know zoom you know that's an edge provider people interact with it directly which is normally the definition that we have as an edge service but all of us sort of feel instinctually like that 
feels more like infrastructure in some way right and and and some of the other services you know payment processing uh can can touch across those things or like data analytics all of these things they don't fit into a neat category and as soon as you start to try and categorize them and then say okay well we're going to set rules on this you begin to come very quickly up with you know all of the exceptions and all the reasons why this would be problematic and sure it makes sense like okay well we don't want this to happen but if you ban that particular thing then you're going to wipe out all of this other important stuff too it's very difficult to write rules that actually make sense and so i'm i'm torn on this issue as well because i i do think it is important that if there is going to be any regulation about content moderation as a whole it's important for regulators and policymakers to recognize that there there are these different aspects to all of these different internet services and treating them all the same doesn't make sense but i'm also really really worried about any attempt to to sort of you know create a taxonomy and classify them and say well there are different rules for different things especially when it's it's constantly changing and adjusting and different services touch on different different parts of the stack at different times and so it's useful to think about infrastructure and the stack and the and edge providers but in terms of actually then making rules based on that i think is probably not the right path to go on so i'm struck that a lot of this discussion has been very firmly about household names right the these big companies that we all know about and also in the american context and i think there's a combination of the fact that america still gets a lot of play around the world but also all you know these companies that we're talking about are all exclusively american corporations malcom st7 asked could the panelists comment on 
emerging technologies, blockchain, peer-to-peer, distributed file sharing, that might prevent this kind of censorship? And I'd like to widen that out a bit and also ask: is this a US-dominated discussion, and how can we encourage other, non-US platforms to compete? This is an open round.

I'll start; I don't mean to hog the microphone, but I've written a lot about the idea of what I've called protocols, not platforms, as an approach to thinking about this stuff, which gets at those alternative models: moving things away from the control of big centralized platforms, making them more distributed, pushing more control out to the ends. I think that's worthwhile, and I'm still a huge supporter of it and think it's an interesting direction to go in. But at the same time, I recognize that it doesn't solve all of these issues, because these are not necessarily issues that are solvable. If people can post content anywhere, a few different things come up. One is that people will find new intermediaries to target to try to take stuff down. We've seen this very much, and you at EFF have obviously seen this, in the copyright context: the original targets were just the services hosting the content, then the intermediaries that were transmitting it, then the advertisers, then the domain registrars. People who want to take down certain content will figure out that there is a choke point somewhere, and they will go as far as they can to find that choke point. So just saying we're going to distribute this doesn't solve the problem of there being choke points; people will figure out where they are and will attack them. The second thing is that you still have the questions about content moderation and how they're going to be handled. There are some unique and interesting approaches that can come from making a more distributed service, but it doesn't magically solve itself; it's something we'll have to see a lot of experimentation with.

Okay. And a second question asks: what is the purpose of this infrastructure regulation, by which I think they mean the actual imposition of rules on providers? Is it punishment for past bad content? Is it prospective, to prevent future bad content? Why does Amazon have a rule that says you must moderate in this way? Is it liability protection for the infrastructure? And does their motivation change how we think about whether it was a right or a wrong decision?

I'll throw in a comment there, again saying that I don't have any special access to how Microsoft thinks about this. I'd be curious, because this is being thought about right now at places like Microsoft for Azure and Amazon for AWS. I agree with Corynne that this is largely liability protection; most of the community guidelines that the social media platforms wrote were to ensure that they had the flexibility to get rid of the stuff that was damaging, with as little responsibility as possible. Over the last five or ten years we've seen a lot of pressure from activists, users, journalists, academics, and policymakers saying there's got to be more, but then the question is: what more? If we approach it with a cold political-economist eye, we say: of course they're going to act in their self-interest, and what we're trying to do is create a landscape of allowed practices and regulations such that even when they do what is in their self-interest, it leads toward something that is beneficial. That's the eternal challenge for media policy. But one piece I do think we want to factor in is that for some of these larger companies, the ones we're talking about, there are also complaints coming from within the companies. When you say, well, I don't want company X to have the right to refuse this particular network or particular user, the flip of that extreme argument is saying: I'm going to force this company to host it no matter how they feel about it. These are not one guy setting up a server because of his ideology; these are corporations full of people. What we've seen in the last couple of years is people saying: I'm not sure why I would tolerate being the service that provides service for this thing that is not only reprehensible to my own politics but seems downright dangerous. That's not to say that wins the day, but it's an interesting piece of this that we shouldn't entirely overlook.

Well, we've had a stack of extra questions, but as the first panel I feel a terrible burden of responsibility to end on time, so I'm going to thank everybody for opening this Pandora's box. We've raised more questions than we had answers, but I think that's exactly the nature of the game right now. Thank you, Corynne, Mike, and Tarleton; that was a fantastic discussion.

Okay, coming up next we have Jillian York, who'll be talking about the wider historical context. It seems strange to be in a position where new media and the internet isn't the exciting thing anymore; it's actually something with a long history. Jillian, I know that you and I have spent much of that time trying to deal with these questions of content moderation, platforms, and infrastructure, not just in the US but around the world. So I'm going to hand over to you and your excellent panel to discuss
the global history of content moderation.

Oh, I think you're muted. Thank you, Danny. Yeah, I'm really excited about this panel; we've got a great lineup here. Rolling around the various names for the panel, I think we landed on Back to the Future. For me, this January, the 10th anniversary of the Arab uprisings, as well as the start of the Occupy movement and a number of other things, has brought back a lot of memories, and in terms of the internet it's given me this framework of everything old is new again: a lot of the things we're seeing happen now have happened before. So let me welcome my guests. Is everybody here? We've got several wonderful panelists today. I can't see them... ah, here we go. Okay, we've got Charlotte Willner; Charlotte is the director of the Trust and Safety Professional Association. Adelin Cai, also with the Trust and Safety Professional Association. Nada Akl, with Darcy. And Sean Li, formerly of Discord; Sean, I'm not sure what your current affiliation is, so I'm going to leave that to you when we get to the first question.

So I'm going to start off with a lightning-round question for each of you. What is something that you've seen happening over the past few weeks, let's say in 2021, in the news that reminds you of the old days, something that we've already debated come back around again? Let's start with you, Charlotte.

Well, where to begin. Something I have seen repeatedly in the comment sections on a lot of discussions is this idea that when people go to alternative platforms to discuss, and problems and shenanigans then occur, maybe we could just ask them to use their real names, and they would be too afraid to say anything bad under their real names. That would surely solve the problem.

Yeah, that's what I call the white man's gambit. There you go. Well, of course. Adelin, what about you?

Yeah, I think the thing that I've noticed is people departing platforms and saying, oh no, I'm being censored, I'm going somewhere else where there will be no rules around the kind of speech I can enact; but then slowly those services realize, well, there's some sketchy stuff going on that we actually need to moderate. So I feel like we keep coming back to this idea that you can create some kind of social network or social platform and not have any rules in place. I found that really interesting.

Great, thank you. Nada, what about you?

Definitely the debates coming up about the real-name policy, which is never-ending and maybe as old as the internet. But for me, and I'm cheating because it's not just the last few weeks, it's been really interesting watching the US grapple with the question of what happens when an elected person does something bad, and how do we deal with it online. It's something that we've seen in many other parts of the world, and it was never big on the agenda for social media platforms; now, all of a sudden, it matters. It's been fun, or funny.

Yeah, I definitely want to come back to that. But before we do, Sean, what about you?

I think the most interesting thing has just been the number of companies that suddenly woke up and were like, wait, we have content moderation problems too. A decade ago it was: hey, everyone that deals with user-generated content is going to have content moderation questions and problems. A great example: after January 6, Peloton chose to remove the Stop the Steal hashtag, and I'm sure no one at Peloton was like, yes, we're coming here to make content moderation decisions. They're selling exercise bikes. But that is the nature of the world that we live in.

Yep, absolutely. Oh, that's a great answer. Okay, let's see, I've got so many questions. Okay, I'm going to
turn back around to Charlotte, because I know you were back at Facebook in the very early days. Take us back 10 years, to January 2011 and the beginning of the Egyptian uprising, the revolution there. What were the biggest challenges that you were seeing at the time, in terms of policy or trust and safety?

Absolutely. There were a number of challenges we were facing, certainly from an operational perspective, which is my background; mine really is operations. I just happened to be married to the guy who wrote the policy, so I know a bit about that, and obviously we had to enforce the policy and give feedback when it was unenforceable. One of the challenges we faced at that time was just a lack of understanding of the dynamics on the ground. We still did not have a very international employee base then; we didn't have a lot of Egyptian Arabic speakers on my team in particular. Maybe at that time we had a few, but it wasn't like I could sit down in Menlo Park and say, all right, now I'm going to figure out what's happening here. That was true of the policy team as well. So you had to play this telephone game, figuring out what's being said and how it's being said. So much of the language of revolution is contextual, and we didn't have that cultural context as easily as we would have liked; we didn't even have the linguistic context. So that was one problem.

Another problem we faced, and I'm going to go back to this real-name policy: I remember when we were dealing with some of the unrest (unrest, is that what you call it?), some of the problems in Iran. There was a great surge of registrations with folks using pseudonyms, very culturally specific pseudonyms, and my team at the time was the team tasked with: okay, well, you have to enforce the real-name policy. So we had all of these Iranian regime people reporting these activists, saying, well, they're not using real names. And it's like: well, yeah, they're not using real names, and so actually my job is to take all their accounts away. That didn't seem like a good development. Every time something like this happens in the world, every platform that finds itself in the middle of it realizes in that moment: okay, it's happening now, and we have to pick apart what we thought this was going to do and put it back together in a way that's actually, maybe, going to better promote the mission of what we're trying to do. But it's in real time, and there are real human consequences, and that's hard.

I see that real names is coming up a lot, so I'm going to put a pin in that and come back to it later in the session. But Adelin, I want to come to you, because of all of you (and maybe I'm wrong; I know, Nada, you've got two companies in your back pocket as well) you've been at the most: Pinterest, Twitter, and Google. You've seen a lot. Tell me, what are some of the things that you've seen over the years that you would maybe do differently?

Yeah, I feel a little bit like an imposter among this group, because I started out doing a lot of advertising policy, and so at the beginning I spent a lot of time looking at ads, Google AdWords and monetization, rather than some of the much broader issues like real-name policies. But I'll share an example of a policy that I think has really shaped further policies at other companies, and that's the policy that Google had on hate speech (I'm actually not sure how it's evolved since my time there). The reason it's a really interesting policy is that it took a very legal view of what hate speech was, and I
think, because Google had such a clearly articulated policy, it became this kind of canonical thing that started getting replicated at other companies. It's not that the company got it wrong or anything like that, but it's interesting because there was very much a culture at that time of wanting to enable as much speech as possible, the idea that counter-speech will do its thing. Taking the approach that they did enabled people to have a lot of speech online, but the trade-off being made was that we weren't taking action on things that may have been hateful but were not legally considered hate speech, and I think we're seeing some of the impact of that today, across all of these different services. We can come back and talk a little more about that, but there is a distinction between what is legally considered hate speech and all the other speech that we're trying to have some kind of position on.

Yeah, it's a really tough problem, and I see companies struggling with this all the time, so let's definitely come back around to that as well. Nada, I want to turn to you. I know you've been engaged a lot with civil society over the past few years; we've run into each other at conferences. One of the questions I wanted to ask you is: what do you see as different today about how companies engage with civil society? I know you've been on both sides of this.

Yeah, I think companies are definitely now aware that they have to engage with civil society, that there's no way around it, and they're definitely engaging at a more granular level. When I worked for social media companies, YouTube and Facebook, I was always representing the Middle East, and we went from having one Arabic-speaking person for the Middle East, whatever that is, to slowly convincing them that we needed people who speak Pashto and Farsi, and I've seen the same thing happening in their engagement with civil society. But what scares me a lot is that they're also very much setting the agenda. What's happening now is that moderation is being discussed as a problem, but what is almost never discussed is the business model that is creating the problems of moderation: a business model that is monetizing eyeballs, for example. A lot of the issues are presented as natural occurrences, and that is a bit concerning in terms of how the discourse is being shaped. I'm not saying that people are just accepting it; there's definitely a lot of pushback. But this is where we're at right now: we need to go back to understanding why these problems exist, not just accept that they exist.

Absolutely. I think there's a lot that we can learn from civil society, but then of course, as we're seeing now, there's a lot that civil society can learn from folks who've been at the companies as well. Sean, let me turn to you. I know that you've got a legal background, and I'm curious what you see as the needs today in policy, content moderation, and trust and safety work: what kind of balance, in terms of human backgrounds, should we be seeing?

Oh man. I think at the end of the day, what everyone wants to do is make the right decision, and obviously the immediate next question is: what is the right decision? I don't think anyone has a particularly good answer for that, but the closest you can come is having as much history and context and understanding as possible. One of the things I will say, leading back to content moderation a decade ago, and I think this is to Adelin's point, is that sometimes the lawyers made the decisions, and that was how it was going to be, and that was it. One of the things that we've hopefully learned in the decade since is that, well, actually, maybe there are other concerns here: human rights concerns, civil rights concerns, a whole bunch of things that we should be doing a better job of understanding in order to make the quote-unquote right decision. A lot of that has been bringing on more perspectives, more diverse perspectives, and certainly, to Nada's point, bringing on more people who actually know the place you're making the decision about. Silicon Valley, when it first started, was very much Silicon Valley-centric; as the internet expanded, it stayed kind of Silicon Valley-centric, and Silicon Valley made a lot of decisions for the rest of the world. Now we're starting to see that maybe that is not the right idea; maybe actually bringing in international perspectives is a great thing. Having the cultural understanding that Charlotte was talking about makes a huge difference in whether you're making a good decision, or a right decision, or even, at the bare minimum, a decision that does not harm people.

That's a great point, and thank you; thank you for also just summarizing my book in a nutshell right there (it comes out March 23rd). But I want to take that as a point to pivot back to the conversation about real or authentic names. Facebook remains one of the only major platforms to have a policy that enforces the use of real names. Of course there are others out there; LinkedIn is one, and I know that does have some impact, although most people are happy to use their legal name on LinkedIn. So the way
that I'm going to frame this question is to just read it out, but I'd love to get a little discussion going. There's a lot of talk about real-ID systems right now, and I keep seeing the white man's gambit, as I call it: people proposing, yes, let's all have everybody sign up with their ID cards, and then the internet would just be civil. And yet I don't think there's really much evidence that real names make people more civil. Many of the Capitol Hill rioters, the perpetrators of Hindu nationalist violence in India, and so on, are using these platforms under their real names. So: do real-name systems work in any way? What are they good for? And is there a better way to promote or maintain civility?

I can be what I hope is an authoritative voice on this and say, as someone who was with Facebook from almost the very beginning of the real-name policy, and who professionally enforced the real-name policy for several years with her team: no, it doesn't work. There were maybe 18 months in the history of Facebook where I think the real-name policy actually did work, and at this point it's impossible to disambiguate how much of that was societal expectations and how much was actually a more restricted user base; there are a lot of questions there. But fundamentally, what we have certainly seen in the last 10 years is that if someone has something to say and they're determined to say it, they're going to say it under their real name, and there's a variety of reasons for that. I'm actually a little more agnostic about the way to solve that, because I think fundamentally someone's going to say it regardless. But I did want to jump in and say: hey, listen, if appeals to authority are worth anything, here's an authority saying no, it doesn't work.

Glad to hear it. Does anyone else want to weigh in on that one?

Yeah. I worked at Discord for a while, and Discord obviously has a pseudonymous policy: you don't have to sign up with anything but your email address, and then you choose whatever username you want. I think a lot of commentators know this, and certainly anyone who works in the space knows this: the ability to have anonymous or pseudonymous speech is really important, and, time and time again, I think we've all said this, especially for minorities. If you're a teenager in Alabama, you are probably not very comfortable discussing sexuality if you're LGBT, but you may be able to do so, and find resources and find understanding, online in a pseudonymous or anonymous space. It is that speech that is harmed the most by advocates of this real-name policy; that speech goes away. To Charlotte's point: yes, if we take a not-very-long look back at the events of a month ago, a lot of those people were very loud and proud about what they stood for and what they were doing; they were posting on social media as themselves, saying, I'm storming the Capitol. A real-name policy would not have changed that at all. But a real-name policy really does harm minorities, and those who are not in positions of power, trying to make their voices heard or even just to communicate with other people.

I'll share, oh sorry, the more snoozy, maybe less exciting detail, which is that when you have a real-names policy and you think about operationally implementing it, the amount of work that goes into it, looking at the ID, do you store the data, what do you do after storing the data, what if someone changes their ID, all of that is a logistical nightmare. Which is not to say you can't effectively implement that kind of system, but it's not just the
policy but all the work that these trust and safety professionals have to figure out along the way in order to make it feasible. That's also part of the conversation, and part of the life cycle of thinking about going from a real-names policy to actually making it effective.

Yeah, absolutely. I'm seeing some notes from my colleagues that we're getting questions from the audience, so I'm going to try to answer these real quick, lightning round. The first is just the history of the real-names policy. Facebook came online to the world in, I think, 2006 or 2007, after it had been opened to universities in the US and then abroad, and it had a real-name policy which required you to use basically your legal name. You didn't have to submit it to get on the site, but if you were challenged about your name, then you had to submit your legal ID. In the beginning this was actually done through email, which was a very insecure way of doing it; later on they developed a method of uploading it using SSL encryption, which is much safer, but still not safe for everybody. So that's the history of that. Then, in 2015, they changed the policy to mean authentic names, which is kind of the name you're known by, and this happened because of an incident where a number of drag performers in the San Francisco Bay Area, as well as a bunch of allies around the world, protested against the policy, because they felt that their authentic names, the names they performed under, were not allowed on the platform. And the second bit: I keep seeing "white man's gambit," and people are wondering what I mean by that. I'd watch The Queen's... oh yeah, yeah, please jump in.

I just wanted to add that the real-name policy is what made Facebook work for the stock market, in the sense, and this is very important, that the only real contribution to social media that Facebook made at the beginning was that they created context collapse (hard to say). This happened in a very weird way, because at first people were okay with sharing, because they were only sharing with their very well-curated campus at Ivy League universities, and this created a certain social contract that fell apart when all of a sudden Facebook was open to everybody. But the reason, I believe, and I'm pretty sure it's well documented, that Facebook wants people to use their real names is that it makes the advertising clicks a lot more valuable, and that's important: it's not a matter of safety at all, I think. And as Charlotte said, it doesn't really prevent people from saying bad things or hurting others.

Yeah, and that's what I've heard from most of the people I've spoken to as well. I just want to answer the one other audience question, which is about why I keep saying "white man's gambit." It's a joke I made on Twitter, because I had just finished watching The Queen's Gambit and thought I was being kind of funny. But really, what I mean is that all of the people I keep seeing proposing this idea, that we should have to submit our identification to use these platforms, are usually white male pundits. They're not your average Joe, let's say; these are usually journalists and professionals on Twitter who are proposing this idea. The reason I call it that is because white men, as well as a lot of the rest of us, white women, majority people in whatever country, don't have to worry about the risks of using our real names in the same way that LGBTQ youth do, or domestic violence victims, or dissidents in authoritarian countries, etc. And so I think that's really why this is such an important debate.

Then I'm going to throw this question to the rest of you. Jason asks whether real names would help in the case of harassment, so that the platforms know who to go after. What do you think about that kind of argument? Is there any basis to it?

I don't think so, necessarily. As a platform, you're generally not going after a person in the sense that, for example, law enforcement might go after someone. Most of the time, as a platform, you're thinking: I would like to stop the harassment, and one of the fastest and easiest ways of stopping the harassment is to ban the person who is doing it. So I don't think there are a lot of cases where I'm like, oh man, I desperately needed to know who this person was in order to at least stop the immediate harassment. I do think that, for a victim who might want to look for civil legal ways of addressing it, yes, that might help. But I don't know that it would necessarily change anything from the platform's point of view, as much as it would help society actually hold that person accountable.

That's a great answer. I was going to add that, in the harassment case, often knowing the identity is more important for the person being harassed than for the person doing the harassing, just because, again, as I've said earlier, so much of this is actually contextual. It is much more helpful to the operations person on the other end, actually assessing these cases, to understand who is being harassed and why they consider it harassment, rather than: hey, someone said a mean thing. Yeah, a lot of people say mean things; we can punish all of those guys. Often when we're asking for verification, it's to verify the circumstances of the situation.

Excellent, thank you so much. Okay, I want to turn away from real names for a minute and go to automation. I spoke to Dave Willner (that's Charlotte's husband) last year, when I was doing work for
my book who shared with me his belief and i hope that i'm quoting him correctly here um his idea that automation is really only good for certain types of things in content moderation that is things that can be classified in a binary fashion so for example nudity it's really easy to say okay this is a picture of a nude woman um we can put this in this bucket whereas automation is less good at things that require nuance um i'm curious what you all think about the uptick in the use of automation particularly during the pandemic and what that means for the future of content moderation um and anyone jump in whoever wants to go first i'll go um yeah like it's definitely only good for binary decisions and very few decisions are binary especially everything related to speech and even before the pandemic companies have made really big promises about tackling hate speech with automations and i i don't see it happening i've struggled with it so much and even in english where most of the researching that exists it's really hard um what's also very important for automation is that um even a very very good automation is going to have an error rate and even a very small error rate for a very big community of say two billion people will affect a lot of people and will usually affect targeted people who are dismissed as edge cases so what this means is that you might have communities of millions of people who are an edge case of an automation and that will again and always be the marginalized people of their own community wherever there is so that's overall scary although yeah i can kind of touch on on this as an obsequent um which is you know the the perspective that like automation is great if you're trying to do a lot of things very very quickly um but um you know your automation is only going to be as smart as whatever models you use to train it um and there are obviously like limitations to that given individuals who might be training some kind of um automation model may have 
deep set biases right that they might not be conscious about um and they might have a certain perspective that skewing the automation i think there's plenty being being um discussed about that um but i think looking at automation as a solution for decision making as flawed whereas we should be looking at automation as a probability exercise and um using automation to assist in trying to narrow down the probability of a piece of content being violated or not and then having some kind of um you know human driven decision making process to take that data and and then convert that into a decision um but automation is not you know a silver bullet and plenty plenty of much much smarter people have kind of like talked about this um in various papers and online um so i won't kind of go into too much detail now that's great thank you does anyone else want to jump in there or shall i move on i was just going to echo what adeline said which is in the past my teams have had um a lot of success using like a front line set of automations that surface things that might be suspicious but it can't determine i yes this is suspicious and here's what we're going to do about it like that the humans have to do that second step um and the example i love to use on this is you know if for example you are looking at a bunch of um like ices screenshots right like of different propaganda videos they put out you might think ah these are going to catch a lot of people who like ices but in fact what you might find is it catches a lot of people who do not like ices but who may be saying things about muslims in general that also violate your terms but for other reasons right so you discover a lot when the robots come in uh but the robots still need a lot of help yeah yeah it sounds like automation really is not going to be up to snuff for some time now especially when we're dealing with nuanced content that really worries me especially um you know in the global context where we're dealing with a 
lot of different non-state and quasi-state actors whose behavior is genuinely problematic and yet who are not necessarily violating the rules. Speaking of which, we've got a question coming in, and I think folks on the panel will have a lot to say about this historically, if not about this precise question. The question: today, apparently, India — the country; I have to clarify that because we have a colleague named India — indicated that it could make Twitter block some accounts in connection with the farmers' protests that are ongoing there. Would anyone like to share experiences with government censorship and requests, and the moral and business dilemmas that arise there?

Sure. I mean, this is the fun nature of the job, right? You have a platform used by hundreds of millions or billions of people, and obviously those people are not remotely going to be from one country or one region, and there are a lot of pretty unhappy authoritarian governments out there that would like to exert any and all manner of control over their people. It really comes to a head when some of those governments decide that whatever content you host, or whoever is using your platform to do whatever, is something they don't like. A popular example that I suspect we have all dealt with is Russia, which has passed a number of data localization laws and is in the news again as of today for other anti-democratic stuff. Russia has a censorship agency that is very happy to send out letters that say, in effect, take this down or we will ban you from the country. GitHub, I think, very famously was banned by Russia in — gosh, 2012 or 2013 — over a piece of content that someone posted. And it's a very hard decision —

not, actually, for the reasons a lot of people probably suspect, like profit or anything like that. What it comes down to is that a country identifies a singular piece of content, or a community, or something like that, and you have to decide: (a) whether that piece of content is worth standing up for; (b) what that content actually is; (c) what you communicate to the user; and then (d) if you decide to leave it up, are you now denying access to everyone else who uses your platform in that country? So it really does become this pretty hard question of what your company's values are, whether this is a needs-of-the-many, needs-of-the-few situation — and to what degree you are setting a precedent for them to come back afterwards and say, well, you bowed to us last time, you've got to do it again. And then that becomes a never-ending stream. So I think every company now has adopted some form of geoblocking to at least address some of this, but every time it happens it's always: does it violate this law, does it violate that law, does it violate our values, and what are we defending here? On any one of these platforms, it's literally a daily occurrence.

Let's get into geoblocking for a quick second, because I'm guessing we've got some folks in the audience who are not familiar with it. My recollection is that the history of this came about with YouTube and Thailand and/or Turkey, circa 2007 — does that sound right to everyone? This is from my conversations with Nicole Wong, but I could be wrong. Does anyone want to give the quick history of how that changed the shape of internet censorship when it comes to governments and platforms? If not, I can.

I'd just say you're probably actually the most qualified to give that full
history. All right, I can do it — the quick version — and then I'll turn it to you to give some flavor around specific instances. My recollection is that circa, like I said, 2007-ish, either Thailand or Turkey or both were pressuring YouTube to take down large swaths of content. In the Turkish context, I remember very specifically, these were football-hooligan videos mocking Erdoğan or Atatürk, both of which were considered illegal speech in the country, and so the option YouTube had there was really just to get blocked. Now, geoblocking had existed — it had been put into place after the LICRA v. Yahoo decision in 2000, which was about the sale of Nazi memorabilia in Europe; my gosh, I'm pulling this all from the back of my brain right now — but it wasn't really common until that particular era, again, 2006-ish. So what happened was that YouTube started removing — or not removing, but blocking, geoblocking — individual pieces of content in specific countries at the request of governments. That meant a person in a country couldn't see something their government had requested be removed, but everyone else in the world could, and this was considered the better solution. But the question I'd pose to you — and feel free to bring in examples from your own backgrounds — is this really a better solution, or does it just create a more fragmented internet for the world?

I think you really have to weigh what you mean by a connected internet, right? And to the earlier point: in a lot of these cases, you either block it in that country or the whole site goes down for everybody in that country. Okay — those are your choices. And that's if you're a giant company with billions of users and thousands of engineers who can handle this. In a lot of cases it's maybe a 200-person company, and they have one engineer who focuses on this in half of her time. In that situation, which decision is going to make for a more open internet? It's not a super clear trade-off, but I think, actually, if I were to do it again I would still say: yeah, it's probably blocking the individual piece of content versus risking the shutdown.

Yeah. I mean, dissenters — oh, go ahead.

Just to chime in briefly: I think it is an unfortunate Balkanization of the internet, but it is also a very politically real situation. You know, I love John Perry Barlow's manifesto for the internet — the laws of man have no dominion here, and all of that — but the reality is that when you're a company, you have to comply with the law. Obviously there are things you can do if you're not in that country, et cetera; there's a lot of jurisdictional stuff. But the fundamental nature of it is that if a country has the ability to just flip a switch and say "you're gone here," they have a power over you that you need to deal with. I think geoblocking has become the accepted way of doing that, because the alternative is a world where you are just not available in certain areas. As time goes on that becomes more and more the case, and maybe it ends up leading to a further Balkanization of the internet — where, as an example, China has its own version of Twitter, Facebook, and whatever, and every country starts looking like that — and it's unclear whether that is actually better for the internet.

Does anyone think there's a line, though — a line that companies should not cross when it comes to that sort
of thing? For me, the example I can think of is in 2017, when Snap and Medium and, I think, a couple of other platforms blocked journalistic content coming from Qatar in Saudi Arabia. That, to me, was a step too far. Are there other examples like that you can think of from your histories? It's okay if not — I know the 2021 brain drain is real.

I feel like there are plenty I can think of, but not a lot I'm necessarily going to talk about.

Probably fair — and the NDAs are also real. Yeah.

I think it also matters how the geoblocking happens: what is the message the user sees when they go to that link? That matters a lot — and how it's reported, though transparency reports happen after the fact. I think that plays into it too, and this is where I would draw a line, because people shouldn't just think that the website is down, or that it doesn't work in their country for no reason.

And from, again, kind of an operational point of view: thinking through how exactly the geoblock is implemented, which signals you're using to implement it, and whether people can reasonably figure out a way to still view that content even though the geoblock is in place. Some very smart people trying to get at content that has been geoblocked in a certain technical way have been able to find it — while still allowing the online service to comply with the laws of that particular country. So I think that's also an important aspect to consider.

Yeah, no, I think that's great. I'm always reminded that Twitter's geoblock, I think, is actually not IP-based but based on your settings — maybe I'm just spilling the beans here, but if you change your Twitter country setting, you can see the content that's geoblocked in your country. Which I don't do, because I'm based in Germany and I don't really want to see the Nazis. But that brings me around to
transparency, and I'm going to do another little lightning round. I'm putting transparency in your heads, but this does not have to be your answer. I want each of you to give me two things — and I'm going to start with Sean this time and work my way back around — two things you think companies should do right now to change their practices around policy-making, transparency, or content moderation.

Wow, okay. The two things — transparency, actually, legitimately. I think there's a reason all of us who have worked in this space push for it, and I think it is one of the most important things. This is, I guess, my personal soapbox, but — also because I'm a lawyer — you have trust in the judicial system in part because you're able to see the ins and outs, the trial, the evidence, everything; obviously with some distinctions, but by and large you get to see why decisions are made, how they happen, and there's due process. I think that is the main trust missing from the content moderation space: you report something, and maybe something happens and maybe it doesn't — or you get banned, and you're just like, literally, what happened? Why did this happen? You have no idea, and that leads to a lot of frustration. The more you can say, look, this is what happened, this is why it happened, maybe don't do it again — the more trust there is in that system. The more public it can be, the more trust there is. So, a hundred percent, that.

The other thing I would say — and I think this is also very common — is making decisions with context. We've all been at scaling companies; we all understand that making decisions for a couple billion people is not the easiest thing in the world. But if you're making decisions based on "is this nudity or is this not nudity," that is, again, a fairly binary question. A lot of the problems we see in the online decision space are really: hey, this one person and this other person have a dispute. What is the nature of this dispute? What is its background? What happened — what were the actual actions? Again, in reference to the judicial system: you want to understand why the thing happened before you can render a decision on whether it was correct, before you can render a decision on who should be punished. The more you try to boil these decisions down to "well, the policy says you can't do this, that's it," without understanding why someone might have done it, the worse decisions you get.

Excellent. Nada, what about you?

Maybe a different thing, but: definitely address the working conditions of the outsourced workers who are doing most of the work and who are carrying most of the context. I don't want to speak for other people, but having spoken to some of them this year — it has been particularly hard working from home, and the working conditions are absurd. One thing that should be done right now is addressing the fact that the business model is directly the cause of a lot of the content they're seeing. It's not just big questions about human nature; it's not just "people are bad, society is complicated, someone needs to do the dirty work." There is a business model that creates more disturbing things for people to review, and that's completely connected and shouldn't be disconnected.

Excellent, thank you. Adelin?

Yeah, I think trying to mainstream these concepts of content moderation, or trust and safety, into the product development process is really crucial to me. Companies and teams are starting to come around to that, but historically — if you'd asked me ten years ago whether I felt my work was prioritized in relation to the overall product development roadmap, I would have said no. And there are plenty of reasons; it's not that anyone was being malicious, but there are trade-offs to be made when you're running a business. Today, I think, we're inching toward being much more cognizant of the debt you accrue when you do not resource your trust and safety teams, and product managers and engineers are starting to view this work as integral to the overall health of the online service. So I hope that keeps moving in that direction, and that everyone is on the hook for these safety issues — that they aren't just dumped on a separate team because it happens to have the name "trust and safety" or "content moderation" or "policy."

That is fantastic. And Charlotte?

I'm going to build right on that. I thought I had three, but I'm going to smoosh it into two. The first: if you are a founder, or you are at an early-stage company, do the work now of talking to people who produce transparency reports, to understand what it is you're going to need to report on, and build that into your infrastructure early. One of the big challenges with transparency is that a lot of companies — and a lot of people like us — want to be more transparent, and literally the data doesn't exist, or it exists in a place you can't pull it from. There are all these structural challenges that take a lot longer to undo than if you'd just done it right the first time. So that's number one. Number two: if you are in a company where you do have a trust and safety team, or, you know, a sort of risk
mitigation team, as Adelin was saying — there's a lot of talk these days about how those workers are treated and how they're compensated, and all of that is very good. Those workers should be paid very good wages; those workers should have very good benefits. The thing you also have to keep in mind is that those workers should be listened to, because your front-liners are seeing all of these problems as they unfold and can help you avoid a lot of pain upstream. And look — I think we should pay everybody a zillion dollars and give them free massages and, you know, the aromatherapy; definitely we should do that — but no aromatherapy in the world is going to solve this problem. It's a hard job. You have to listen to your trust and safety professionals.

That is fantastic. We're running out of time, so I want to give each of you about ten seconds to shout out something about the work you're doing, where people can find you, whatever you want the world to know, and then I'll wrap up. Charlotte, go ahead.

We launched a job board this week — if you go to tspa.info, I think it's slash job board, or just go up to the resources menu. Check it out; you too can work in trust and safety.

Awesome, thank you. Adelin?

I am independently consulting right now. I'm not necessarily looking for people to work with, but if you find me on LinkedIn, I'm happy to have a conversation anytime about trust and safety.

Awesome. Nada?

We're about to launch a prototype for a social media website where you get to host and own your own data and still communicate with people. And no ads — never.

And what's it called?

Darcy.is.

I love it — and it says "social done proper" on the website; I just love the framing. And Sean, last but not least?

I am also, I guess, consulting right now — feel free to reach out. But mostly I just want to shout out all the work the trust and safety teams are doing out there, day in, day out. It is a hard, crazy job. And a shout-out to the EFF for putting something like this on — I think it's events like this that teach people that this is a hard and complex job, that it's not an easy thing where you're just making decisions based on whims. There's a lot of crazy, complex stuff that goes into this, from geopolitics to history to philosophy to psychology. So thank you all for the work you're doing to make the internet a bearable place from day to day.

Well, thank you all for bringing so much nuance to this. I'm going to follow up with all of you, because I would love to have you come chat with some of us off-screen sometime — I think that would be really rad. So, thank you to all of my guests today. I'm going to kick you off so the other folks can come back on — apparently the platform doesn't allow too many hosts — and introduce our next group of speakers. Our next panel is "Tying It All Together: How EFF Thinks About All This." It will be moderated by Gennie Gebhart, my colleague and manager, and we'll have David Greene, Katharine Trendacosta, and Christoph Schmon on this panel. So, Gennie, over to you.

Thank you so much, Jillian, and thank you, everyone, for joining us and tuning in to our final panel of the day, starting now. In this last session, like Jillian said, we're going to try to tie together the issues we've discussed so far and really go deeper into how EFF thinks about content moderation and platform censorship, not just in the US but on a global scale. The colleagues joining me from EFF are our Civil Liberties Director David Greene, our Associate Director of Policy and Activism Katharine Trendacosta, and our International Policy Director Christoph Schmon. Thank you all for joining us — especially Christoph; it is very late at night in his time zone, so we're very glad to have him here. So let's kick
it off with a question for David. Recently, EFF has written a lot about the takedowns in the US, and we've discussed a lot of facets of that here today. It's been interesting to see some supporters ask us, "EFF, why have you given up on free speech?" — while, at the same time, other supporters who care just as deeply say, "EFF, why are you still such free speech absolutists?" So, David, what is going on here?

Well, everyone's right, and everyone's wrong. It's not unusual in any free speech dispute that there are going to be free speech interests on every side — and I'm not saying on both sides, because there tend to be multiple sides; these are many-sided free speech disputes. So you tend to be in a situation where you're looking at free speech interests all over the place, trying to decide which is the most urgent one to respond to, and also looking for a unifying principle around the whole thing. What we saw early in January was like a thousand things happening at once, so it was certainly a challenge, in terms of messaging, to respond to one thing and also respond to everything at once. I thought it was a really good example of where we needed to weigh the free speech rights of these private companies vis-à-vis the free speech rights of individuals, vis-à-vis the free speech rights of governmental officials. These are really complicated things, and so we tried to sort everything out. When we talked about one aspect, we said: we think that, certainly under US legal doctrine, the companies have a legal right to moderate their sites — but you always have to follow that with, "but we are very concerned with the power they exercise when they do so."

Then you throw in the whole government angle, where they're shutting down a government speaker, and you start asking how we feel about government speakers. Typically, free speech disputes are about trying to prevent state control of private speech, and it's a very scary thing, from a free speech perspective, to think that the government could force a private actor — whether an individual or a company — to carry the government's own message, or just its approved message. At the same time, it was really scary to see the power the companies exercised, and how much authority we have invested in them. So these are really tough issues all around. In some ways I appreciate all the comments — but I can assure you we certainly care about free speech; our concern for free speech is the driving principle.

Absolutely. To add another layer of complication, the next question is for Katharine. Another question we've seen recently, and that you and I have talked about at length, is how these recent takedowns have or have not changed the game in censorship online. We see a lot of commentators calling Trump's takedown, or Parler's takedown, unprecedented — saying they set a new standard, whether the people commenting think that's good or bad. So: are they unprecedented? Are they setting a new standard? What's your take, Katharine?

They're not unprecedented. If Jillian were here, she could lay out the entire history of the internet and tell you that that is not the case. They're not unprecedented — they never are. That's the really important thing to understand when we talk about these things: it's not that they're unprecedented, it's that attention has never been paid before, if that makes sense.
Generally speaking, when this stuff happens, it's been happening to other groups for a very long time — just groups the press hasn't cared about, that haven't been in the news, that don't have the kind of weight to throw around, or the lawyers to sue, that some other group might have. It's not even unprecedented on whatever level of the stack you're talking about. It may be unusual, and it may be unusual in the United States, but it is definitely not unprecedented.

Absolutely, and I appreciate that distinction: unusual, certainly, but that doesn't mean it's entirely new. And continuing to look at this globally — I appreciate you pointing out, Katharine, that it might be the first time Americans are seeing this happen to members of their government or to accounts they follow, but it's not new to a lot of other countries. So, going to Christoph, our international policy director, to continue looking at this globally: we saw that the removal of Trump's posts by these big platforms was questioned by a lot of political leaders in Europe, where you very much lead EFF's policy strategy. Can you tell us more about that, and what you take from it?

Sure. I think there's an overall consensus in Europe that big platforms should be able to moderate content they consider problematic. But when Trump's account was disabled, many felt irritated that it's only the very large, and only US-based, platforms that make all those decisions — they have the power to change political discourse — and many concerns were raised that there is no legal framework in place in Europe that requires platforms to be transparent about those decisions, or that gives users rights if something goes wrong. What is going on now is that the European Union is working on the creation of such a legal framework, called the Digital Services Act — the DSA. It will deal with transparency, content moderation, liability of platforms for user content, and many other issues, so you can imagine this new bill will have a huge impact on what users can say, post, or share online. EFF is working quite a bit on it, and we are glad to see that many of our suggestions made it into the first draft — like the principle that platforms should not be liable for third-party content, and that they should not use upload filters. If this bill becomes law, platforms will have more options to voluntarily act against problematic content, but they will need to be very clear from the start about how their content moderation systems work, and they will need to act responsibly if they decide to remove content. And — which is quite novel, I think — if mistakes happen, users, and this may well include politicians like Trump, will have the right to have their content and their account reinstated. So if this should happen again to leaders of government, that would be fun, I think. What we're going to do now is work with the European Parliament to ensure that the DSA does not follow in the footsteps of some terrible bills we have seen in the past, like the Copyright Directive — think of the Article 13 copyright upload filters — because there are many groups out there that will try to make the DSA another censorship machine. So what we are going to do is fight against all of that.

Thank you, Christoph. You mentioned transparency, and we got a question from the audience in the chat a bit earlier that I think we can address here, specifically on transparency. Someone asked, "On transparency, would the Santa Clara Principles help?" They also asked the previous panel, which was involved in operations — but I think this group has some visibility into this as well — what challenges on the operations side of a company can we
imagine, or do we anticipate, might make adopting those Santa Clara Principles with regard to transparency harder? Does anyone want to jump in?

Well, I can jump in on the "will they help." I hope so — we spent a lot of time on them, so I hope they'll be helpful. The Santa Clara Principles grew out of an effort — because what we kept saying was: even if there's no legal requirement under US law, no legal restriction on platforms moderating content, there are certainly really big free speech implications when they do, and so if they're going to moderate, they should do so within a human rights framework. The question then was: what is that human rights framework? The Santa Clara Principles are the effort to devise a human rights framework for content moderation. They're fairly broad — at least in their current form they're not highly specific — but they are designed to at least give a way of thinking about how you do content moderation that is respectful of human rights broadly and free speech principles specifically. They're also designed to be an international document, not something where we're only looking at US law or European law or whatever. So I hope they can be helpful — and transparency is one of their three main components.

I just want to hop in and say that, in my experience, for basically any decision a platform makes, transparency would be helpful. And I don't just say that as someone trying to analyze what's going on. I say it because — and I do this a lot, because of Twitter — whenever I talk to someone who's had something taken down for any reason (copyright, terms of service violations, whatever), most of the time they have no idea what's happened, no idea why it's happened, and no idea what their recourse is, if there is any, or what they're supposed to do. Some transparency from the services would really help people figure out what to do, and also how not to run afoul of these policies. And transparency shouldn't come only when people are kicked off — it should start at the beginning, with terms of service that everyone understands, clear enough that an average person could read them and say, okay, I know what this service does and does not allow. That goes together with something that's also part of our broader statement, which is consistency: if rules exist but are only applied sometimes, or only when a story is big in the press, then that's not really a rule — and how are you supposed to know you're supposed to follow it?

Can I just add one thing? To circle back to what we talked about before — we've said these sites have a right to moderate as they want. I think the advantage of that, for users, is that you can pick and choose the sites you want because you like their style, you like their editorial viewpoint, you want them to curate content the way you want as a user. But none of that works unless you know what their style is — unless you know what their policies are. If they're going to have an editorial viewpoint, if they're going to have curatorial policies, we should know what they are. Users aren't going to benefit from platforms having those policies if they don't know what they are, and if they aren't clear and enforced consistently.

I am so glad our conversation has gone here, because it leads perfectly into my big question for Katharine, which is: how does competition play into all of this? We're talking about transparency; we're talking about platform
policies and how users are supposed to know what they're signing up for is it a coincidence that we tend to be focusing on these huge market dominant platforms when we talk about these takedowns yeah so my general opinion is that my goal in life and I think a lot of you guys could share this with me that when I go to EFF every day for work I did not have to care about Facebook if I could never think about Facebook again I will have considered my job complete the problem is that Facebook is billions of people and because of its size every decision it makes is outsized and important and because of its size there's no possible way that it has rules that make sense for everybody like it's just impossible and the size as a metric is itself just dangerous I think in the previous panel people talked a lot about the business model and all of these companies have put growth and size as their selling point you talk here this is you have access to this giant audience the problem is that then you run into the problem of there's no choosing a platform that works for you there's no you just have to be on Facebook you don't really have a choice I talk to small business owners in my neighborhood all the time many of whom want to be very ethical and they say like I would love to be off Facebook but I can't I can't be a small business and not be on Facebook it doesn't you you just won't exist same with Google same with a lot of these services and if we had many services with different rules and people were picking the ones that worked for them as sort of David was outlining before then we'd have an internet that actually functioned for most people rather than one which like forces you into Facebook or forces you on to Google and I know that 230s real sexy and people love to talk about it and that when you're getting hauled off a platform screaming free speech is like cool and edgy but it's competition the problem is competition the problem is that if you get shut out of like five 
services, you're effectively cut off from the internet. And if being kicked off of Facebook didn't matter, then we wouldn't be having these kinds of conversations. So when I get kicked off, which I assume is inevitable, I will not be screaming about free speech; I will be screaming about antitrust. That's what you'll hear as I get bodily hauled from Twitter: "Sherman Act!" she screams on her way out.

Well, you alluded to some policy questions with 230, and I want to ask Christoph about this, looking at both the US and the larger global policy space. These are huge platforms that operate in a lot of countries, not just ones where Section 230 is a thing. We've seen, of course, a lot of discussion in the US about whether Section 230 should be amended, but it also seems that other countries around the world have started to propose their own new rules for social media companies. So Christoph, can you first explain how some of these legal regimes, and how they might change, are quite different from the US? And also, returning to a question from Javier from Access Now in the chat a while ago, how do you see the issue of content moderation concerns, given these varying norms and laws around platform gatekeeping power in different democratic countries around the world? A lot to cover, but we know you're up for it.

Well, the situation is complex, but I'll try to break it down. If you take the full spectrum of liability regimes: on the one end we have the idea that platforms should never be held responsible for the speech of users, the idea of full immunity that we have under US law. On the other end we have the idea that platforms can potentially be held liable for all sorts of user conduct, without any requirement of fault or knowledge, which is the Chinese system. And then we have all the systems that sit somewhere in between, like the European Union system, where liability depends on whether the
platform had knowledge about the conduct. If you look at all that, we see an international trend going on from passivity to proactivity. Governments are starting to regulate platforms with a much heavier hand and want them to be proactive. Governments now have this tendency to tell platforms: you must have systems in place to check whether illegal content appears, and if it does, you need to take it down or report it to law enforcement authorities. We see this in many jurisdictions, in the laws of Germany or New Zealand, which have set out systemic duties for platforms. Then we have other governments that think about how best to exercise control over platforms and speech. For example, the Turkish government now wants platforms to be very responsive and to take down content on demand, and there are new rules in place that require companies to store data within the country and to appoint a local representative, so the authorities have a grip on the platforms if something goes wrong and they don't comply with orders. On the other side, we have the government of Poland, which does not want platforms to take down any content that is not explicitly illegal. So, for example, it would be illegal in the future if Twitter took down hate speech against LGBTQ communities, because Polish law would say it's not illegal in our country to be hateful against those communities. So you may understand why we are very worried about those developments, particularly in countries where we don't have any independent judicial oversight. And perhaps to get back to the question from Javier from Access Now: is it a problem that there are so many different legal approaches, even among democratic states, and different ideas about content moderation? I fully agree that it's not easy. Different countries have a different sense of justice; different countries have a different idea
of the responsibility of platforms for content. Even the idea of freedom of expression or freedom of speech is not quite the same everywhere; in some countries it has a strong positive side, meaning that states need to protect users from interference even by private companies. But I believe we have common principles, and I believe that many users around the world struggle with the same problems. As was said before, Facebook is everywhere, so users around the world face the same problems if Facebook does not respect their privacy or if they are blocked unfairly. So for me the obvious solution is not to enter into a weird geopolitical fight over who is the better censor, the states or the platforms, but rather to focus on: can't we actually give more power to users, better rights and autonomy? And this includes options to break out of the data silos platforms have become, for which we need antitrust. So that would be my answer: we need to focus not on giving more power to the platforms, but on giving more power to the users.

Thank you, and those are such rousing words to take us into some of the questions from the chat during this session. First I will address perhaps the most important question on any EFF video call: how many cats are there, and what are they doing? I tried to set up a virtual background, I'm so sorry, but the cats' waking hours tend to coincide with whenever I'm on a video call. There are two; Catherine has one, and Christoph, I believe, also has one, who is much better behaved, clearly. So we hope you enjoy the cats; cats are truly the purpose of the internet, and they keep us going in our work. Another question, perhaps more pertinent to our topic at hand: someone in the chat apologized if this is off topic, but I think it very much relates to what we're talking about, and it's an example of how we weigh the different concerns the panelists have brought up. We heard
in the chat: "I've been grappling with the danger of vaccine misinformation on social media versus the danger of censorship. Is there any way to mitigate both concerns?" I think I'm going to throw this one to David, because we actually published a post about this a little while ago. So David, what do you think?

Yeah, and I hope someone can find a link to that post and put it in the chat. This is one of those really sticky issues where I don't think there's a great answer. It's one of those times where I actually think it's good that private companies have the ability to adopt various policies regarding misinformation, and we saw a bunch of different approaches. Of course, it would be better if there were a hundred companies doing this and not five. What we saw, both in the US with election information and with COVID health information, and the vaccine misinformation issue has been around for many years now, was a sort of experimentation with how we address misinformation broadly, and what we do when we think that, because of this moment in time, this inaccurate information is especially concerning. So we've seen a lot of attempts: labeling, account suspensions, and none of these things, especially suspensions, am I especially comfortable with. The ability of government, certainly the state, to insist that false information be removed is something that historically has been used as a tool of oppression. I think everybody on this call knows that it's a very dangerous tool to hand the government: the power to say that what it believes isn't true, which tends to be things critical of it, can be punished. So it's hard to
see a really strong state role here. What I'm really interested in seeing, and I hope researchers will be given access to the information, or even what the companies themselves say, is how effective they think their strategies have been, whether around the US election or the Ugandan election, where they even further ramped up these policies: whether they thought their policies actually helped to stop the spread of misinformation, and how many mistakes they thought they made. I really hope we get some examination, some data back, so we can see whether these were particularly effective or not.

Can I just say two things real quickly on this topic? First, as David said, I wish we had a hundred options, not five, if only because if everything were smaller, the effect of the misinformation itself would be reduced: you couldn't just megaphone your way to a hundred million people at a time. Second, I actually really like Twitter's approach, mainly because "this claim is disputed" has become a meme, and I think that's proof that it worked. If the meme is "this thing that was said is not true," then it has in some way succeeded in telling people that things are not true.

That is a great point: if we see something spreading like that, even jokes about it allude to how it's being understood and used. So we just have a few minutes left, and there's one more big theme I'm seeing in the chat. David, Christoph, even Catherine, I think we've all alluded to it, and I want to give the panelists a chance to respond directly, even briefly, in our final minutes. It's the issue of legitimacy in companies versus the government: who has the better mandate to choose what stays up and what comes down, if a better
mandate even exists? We see the companies do it now; do democratic governments have a more legitimate role there? People are also asking whether we think, on average, private companies do a better job moderating speech than governments. Obviously these are big questions we've been alluding to all day, especially in this panel. Is there anything else, David, Christoph, or Catherine, that you'd like to say to address that directly?

I'm really interested in what Christoph has to say about this from a non-US perspective, but I'd say that government always has a role, because government always has the ability to declare speech illegal, which then sets a baseline for what would have to be taken down. So I don't think there's ever really a situation where government has no role at all. When we think about this, we really start thinking about who is more accountable, and I think that's a moving target. There are some situations where we think a government might actually be better placed to do this, because the government is more democratically accountable, and that's a case-by-case situation. And a lot of our calls for accountability from the private companies come because we see them exercising so much power. To me, the answer is that I don't think either one of them is the best answer. What I want to see is the private companies being less influential, and this gets to the idea of competitive compatibility: having too few players and a lack of interoperability in the space means that even if someone is accountable, you have very few options for acting on it, because if you're unhappy with
what the site is doing, how it's treating your speech or the speech you want to receive, you have very few options for dealing with that. So I don't think we have great options in either place right now, and again, state regulation of speech is going to be the last place I'm going to go, personally.

Well, and I think, as you say, neither of those are very good options, and where I think Christoph was heading is that we'd like to empower users: users should have the power to determine what speech they want to see, what expression they would like to have and be exposed to. Christoph, Catherine, do you want to say anything else as we wrap up?

I don't want to reopen the whole subject of the discussion, but I think there are different roles to play. The state has a certain role to protect citizens and ensure the protection of fundamental rights, and platforms do not have that role at all, which doesn't mean there's no overlap. For example, we have always said that platforms are free to moderate content, but if they do so, they should orient themselves toward the ideas of fundamental rights: not be discriminatory, be transparent in all these sorts of things, be proportionate, and only take measures that are necessary. But I think the idea of today's discussion of freedom of speech is not to call for the state to protect me against the actions of platforms; I think that's the wrong idea. The role of the state is to give dignity and choices to users, to enable them to connect with their friends and to share their lives with their family members. And one way for the state to achieve that is to obligate platforms to be transparent, to ensure competitive markets, and to ensure that the last word on whether content is legal or illegal is up to independent judges and not up to huge,
powerful platforms. I think that's the idea of the state: to make sure that what belongs to the state belongs to the state, and what belongs to the platforms belongs to the platforms. But the problems we face, I think we can best solve by giving more power to the users, and that's my honest opinion.

Absolutely. With that, we are at time. This has been an incredible event; thank you three for your time, thank you to all our panelists from EFF, from academia, from industry, from journalism, and thank you to our audience for being with us for this last panel and for the whole event. If you'd like to support the work we do here, you can become an EFF member: join the fight and get some cool stickers at eff.org slash at home. That's eff.org slash at home. And with that, thank you so much; we'll see you next time.