Welcome to Mozilla. It's lovely to have you. It's lovely to see this space bustling with conversation already, and we look forward to this evening's program. I'm Ashley Boyd, the Vice President of Advocacy for Mozilla. Probably many of you know Mozilla by our famous browser, Firefox. Stay tuned next week; we'll have a new version of it, so we expect all of you to download it. But many of you may not know that Mozilla is owned by a non-profit, the Mozilla Foundation, and that's something we're really proud of. In fact, we're guided by a manifesto. I joke that I always wanted to work for an organization that had a manifesto, so finally I have one. The manifesto includes a number of fantastic, important principles, and if you haven't looked at it before, or lately, I encourage you to do so. The second principle concerns the Internet: it says the Internet is a global public resource that must remain open and accessible. I feel like that principle, like all of them really, is deceptively simple. We know in practice that it doesn't just happen on its own; it requires real stewardship and attention. In these times, we see places around the world where freedom of expression is under threat at an individual level. Online harassment, hate speech, fake news, censorship, and even violence are some of the biggest threats to our Internet and some of the most important Internet policy issues of our day. For that reason, we're really excited to join our friends at the Wikimedia Foundation and the International Justice Resource Center to welcome one of the world's foremost experts on free expression, David Kaye, the UN Special Rapporteur on Freedom of Expression. So thanks so much for joining us, and I will hand the introductions over to Jan from Wikimedia to continue. Thanks so much.

Thanks so much, Ashley, and thanks to Mozilla for hosting us tonight. My name is Jan Gerlach. I'm a public policy manager at the Wikimedia Foundation.
We are the people who these days are sending you an email about free knowledge. We have a new version every second, so maybe don't go download it; go visit it. Sorry, it was such a good segue, and I really don't have anything to add there. The open Internet, freedom of expression: it's all under attack, as we like to say every day. With this series, Free Open Shared, a series of conversations around collaboration, policy, and knowledge, we want to raise awareness of things that are going on on the Web, in knowledge, and in policy, as the name says. We kicked this off about a year ago, and this is the first time that we're actually off-site, not in our own office. So thank you so much for hosting us tonight. We're really happy to have David Kaye here, and to be co-hosting him with the International Justice Resource Center. Here to introduce David a little further is Citlalli.

Hi everyone, good evening. My name is Citlalli Ochoa. I'm a staff attorney at the International Justice Resource Center. We're an SF-based international human rights organization that works with advocates here and around the world, providing informational resources, trainings, and technical support. It is with great pleasure that I introduce David Kaye, UN Special Rapporteur and also my former professor at UC Irvine School of Law. He is not only a great professor but also a huge advocate for freedom of expression. His work has included reports on encryption and anonymity, whistleblowers and sources, and contemporary challenges to freedom of expression in the digital age, which is very relevant to our discussion tonight, and he is currently working on a study of content regulation in the digital age. So without further ado, David Kaye.

Thanks, everybody, for coming. The last time I spoke was actually at the Foreign Correspondents' Club in Bangkok, and they have a big bar in the back, so it seemed to go well whenever I needed a drink.
I really don't mind if you get up and get a drink; it's all on Yochai tonight. I want to thank Mozilla, Wikimedia, and IJRC for hosting tonight's event. I hope I won't talk for too long. This picture here, I think, symbolizes, or at least yesterday symbolized, how I felt on the internet. I was saying to a couple of people, I don't know if you saw the story in The New York Times on Sunday about YouTube Kids and how some pretty obviously disturbing content was getting through the filters. For whatever reason, after I read that, I just felt a little bit overwhelmed. That was just one more issue in the drip, drip, drip of issues we see in which there are negative stories about what's happening in digital space. Even if that story didn't quite get at the truth, I think the emerging narrative about the internet is that it's a dark space, and even though, to write that story, the journalist probably had to use the internet quite a lot, the story we're seeing by and large is about how digital space is a place of threat to civil society, a place of censorship, and a place of deep concern for parents, for minorities, for journalists, you name it. So over the next stretch of talking, and people should feel free, as I said, to get a drink or interrupt me with either a heckle or a question, I hope to do a few things. One is to give us all a framework to think about internet issues, or digital-age issues, and the framework is a human rights law framework. I'm guessing that most if not everyone in the room knows what that framework looks like; I'll go back to it just to give us all a reminder, in part because my mandate is rooted in human rights law, not in First Amendment law, and I think human rights law actually offers us a lot in terms of how we could think about some of the issues we're dealing
with today. After that, I'm going to go through an awful catalog of censorship in the digital age, none of which looks all that pretty. Then at the end I'll ask: how do we think about this? What are the things we might do going forward to improve the situation? There's certainly room to improve, but how do we do that? So let me start with the framework. Everybody should know Article 19, not just the organization based in London but the actual article in the International Covenant on Civil and Political Rights, the ICCPR, to which the United States is a party and to which something like 163 states are parties. I think it's the shared vocabulary for thinking about freedom of expression worldwide. When you go to Europe, where the European Convention is very similar to Article 19, or anywhere else around the world, they don't care that "Congress shall make no law"; they care that everyone shall have these rights. So, just to go through them very quickly: everyone shall have the right to hold opinions. I think the key for most of my work is this particular part. If you think about it in contrast to the First Amendment, which says "Congress shall make no law," focused on the institution, human rights law is flipped around: it's about our rights. It's not just about whether governments can do X, Y, or Z; it's about everyone having the right to freedom of expression, which is the freedom to seek, receive, and impart information and ideas of all kinds, regardless of frontiers, through any media. I say this all the time, but this is language written for the digital age. The "regardless of frontiers, through any media" really gives us open space to think about what we as individuals have a right to: seeking, receiving, imparting information. You could think of it in internet terms: browsing, searching, posting. It's the
language of the digital age. Now, paragraph 3. Usually I try to go over paragraph 3 fairly quickly. Paragraph 3 is where we see the restrictions that states may impose on freedom of expression, and, footnote, this only applies to freedom of expression; it doesn't apply to paragraph 1, which is opinion. Opinion cannot be restricted in any way. Maybe later we can talk about what that actually might mean; there's not a lot of law around that in particular. But Article 19(3) says that states can restrict as long as they meet three conditions. One, the restriction has to be provided by law, which typically means it can't be a law that gives limitless discretion to the executive branch, for example, or to law enforcement, to impose restrictions. Two, the restrictions must be necessary, which we read as necessary and proportionate. Three, they must serve one of these specific objectives: respect of the rights or reputations of others, protection of national security or public order, or public health or morals. This is the framework I want us to be thinking about as we consider what restrictions are possible and lawful in a digital age. Now, one other article is worth mentioning. It's very common for us to say, or for me to say, that there's literally no such thing as hate speech as a matter of law, but actually Article 20, in particular paragraph 2, focuses on issues around hate speech. It says advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence shall be prohibited by law. So states are under an obligation to prohibit this kind of advocacy. The second thing is that it's not enough to say that hate speech, hate speech on national, racial, or religious grounds, should be regulated; it has to be advocacy that constitutes incitement. What counts as incitement to discrimination or hostility may be difficult for us to define, but it's certainly clear in the context of incitement to violence. So before we get into the
digital age, I have a few words about censorship generally. I'm going to use censorship, small c, as tantamount to an interference with freedom of expression. I just wanted one word for it; it won't work in every example here, but generally I want us to be thinking of restrictions on freedom of expression in digital and physical space. Online and off, these in many respects don't vary from how censorship has always been. Take a look, for example, at Thailand. This is just from a couple of years ago, maybe 18 months ago, when there was a story in The New York Times about the sagging economy in Thailand which appeared online. But when the paper went to print, Thai authorities demanded that the Times not print that particular story. So the Times printed the issue of the paper that day and just left that space entirely blank. That's just regular old censorship; it doesn't look a lot different from what we've seen before. I'm going to pick on Thailand once again, because attacks on journalists, which are rampant and awful (we've seen at least 30 journalists killed for being journalists this year alone), have continued, again online and offline. Here the Thai leader, soon after the coup, was asked whether the government would do anything about journalists who report negatively about the coup, and he said, "we'd probably just execute them." He doesn't have a real sense of humor, apparently, so this was taken pretty seriously, as it should be. Other areas: sedition around the world. This is Zunar, a cartoonist in Malaysia. He's under a travel ban and faces a trial that could lead to over a dozen years in prison. Zunar, whose cartoons are really sharp, is charged with sedition for the content of his art. We also see criminalization of false information. I'm not going to go through this (way too much text for a slide), but take it on my word that around the world
we're seeing laws, as we've seen for decades, that criminalize the dissemination of false information. I'll come back to this when we're thinking a little bit about false information and fake news, but these kinds of laws have been on the books. One of the worst things, and this is something that's been percolating up for many years, is the redefinition of journalism, and the redefinition of dissent and criticism, as terrorism. These are Aslı Erdoğan and Necmiye Alpay, two writers, really important intellectuals in Turkey. They have been charged; they were in prison; they're now out on bail awaiting trial. They've been charged with membership of an armed terrorist organization, disrupting the unity and territorial integrity of the state, and making propaganda for a terrorist organization. Why? Because they were involved in a foundation, essentially, that supported Özgür Gündem, a Kurdish newspaper in the southeast of Turkey. So they face charges in Turkey, and potentially life in prison, for their work as journalists, as intellectuals, but they've been redefined as terrorists. That is all happening in offline space, and what we're seeing is that it has moved into online space as well. The Human Rights Council, which is the central human rights body of the UN, which appointed me to this position and has appointed another 50 or so special procedure positions, has said for quite a number of years now that offline rights apply online as well. Unfortunately, the flip of that is also true: offline repression is happening online as well. So now I want to talk a little bit about places that will not be new to you in this room, but about different ways in which we see censorship perhaps having a bit of a changed face when we're talking about online censorship. One is the scale. Think, for example, of the Great Firewall of China, which is what we call China's
ability to prevent its citizens, or anybody in the country, from accessing websites around the world. I just tested this yesterday; you don't need an advanced degree to know that Facebook isn't accessible in China. This is the kind of scale that the digital age has allowed. In the past, of course, we've had blocking and jamming of broadcast and other forms of media; this takes it, I think, to another level. We also see internet shutdowns. A little shout-out to Access Now, right here, which has really been a leading organization in pushing activists, journalists, and governments to recognize that we're in the middle of an epidemic of internet shutdowns, where governments force telcos and ISPs to shut down the internet for a variety of reasons. Sometimes it's for reasons having to do with national exams, because they don't want any cheating; sometimes, and more likely, it's in the context of national or regional protests. For example, if you've been in Kashmir for the last 18 months or so, you've been extremely lucky if you've been able to get online, because India is bringing down the internet on a very regular basis. So this, again, is censorship at scale. And keep in mind, when we're talking about either blocking, like with the firewall and the mass filtering, or about shutdowns of the internet, we're also talking about shutdowns of mobile communication: shutting down not just access to particular information but the ability for people to communicate with one another. The Human Rights Council actually stepped in here and said it "condemns unequivocally measures to prevent or disrupt access to or dissemination of information" online, etc. It's not having much of an impact, though; we're still seeing a real uptick in internet censorship. We're also seeing something that obviously exists offline as well, and that is state propaganda. Again, I'll come back to this a little bit in talking about
false information and fake news, but of course, at scale, the ability of states to propagate information that is either misinformation or propaganda is pretty substantial, and I think we can tie this back to the non-internet, non-digital forms of censorship. Another set of problems: surveillance and insecurity. One problem that is very clear, of course, since the surveillance revelations: we're quite familiar with the level of surveillance that takes place in the context of the Five Eyes, the developed world's ability to have access to all of our communications. There's also a kind of democratization of repression, and that is the ability of essentially Western companies to sell their surveillance and spyware technology, so that it's really not just a handful of governments around the world that are using that kind of technology to interfere with the expression of individuals. Now, you could say, well, this is really about interference with privacy; that's what surveillance is fundamentally about, getting into your private lives. But of course our ability to express ourselves, our willingness to express ourselves, depends on having a kind of private space and security in order to do that. And we see that also in the context of states around the world undermining encryption and anonymity tools. So over time, while the digital age has offered us so much access to information and so much possibility of connection, with surveillance and with other forms of censorship we've also seen the dark side of how that works. And of course there are also things like DDoS attacks, which hit all individuals but hit civil society especially hard. One of my earlier slides identified work from Citizen Lab, which has done a lot of really important work looking at how civil society organizations and human rights activists are particular targets of that kind of digital attack. Now, that was looking at scale. Censorship now is also very targeted, and digital space makes that kind
of easy. I would encourage people to look at Twitter's most recent transparency report, because it's really useful in showing the extent to which states and governments are seeking access to information but also seeking to take down specific content. If you look at the removal requests between January and the end of June of this year (I sorted this, which the transparency report allows you to do, by the number of removal requests), you can see that the first column shows removal requests by court order: Turkey, 715. Removal requests by government agency, police, or other: almost 2,000 from Turkey. And you can go down the list and see the others, as to how governments are going to companies and asking them to remove content, take down accounts, and so forth. Now, that's understandable, because the companies operate in the context of local law, and their ability to operate may depend on whether they actually adhere to local law. But in addition to these kinds of removal requests, which are generally focused on perceived or alleged violations of local law, governments are also demanding that the services, the platforms, remove content that is inconsistent with the terms of service. So you could imagine situations where the government itself has no legal authority to take down particular content under its own law, maybe because it adheres to human rights law. In those kinds of situations they may go directly to the companies and say, we see violations of your terms of service here: abusive behavior, promotion of terrorism. There you'll see the percentage of accounts actioned was 92 percent, where I'm guessing there's some over-regulation going on, because "terrorism." So as we think about the kind of targeted censorship we're seeing in digital space, it's not just governments saying you must adhere to local law; it's also governments saying to the platforms, you must adhere to your own terms of service, or we're going to push you to
adhere to your terms of service. So, the last set of issues. These are, in a way, the hard issues today, and I think we're still at an early stage of understanding the impact of private content regulation on digital space. Certainly we're at a very speculative place in terms of understanding how the companies' own community standards and terms of service may actually influence how we, or our publics, think about freedom of expression. So I just want to identify a few different questions here. Obviously everybody remembers, because it's come up as an issue again, and it's been with us since the election of Donald Trump a year ago: right after his election there was a kind of initial freak-out about fake news. I found this really interesting, this editorial from the editorial board of the Times, basically putting all of the onus on the platforms to regulate. There's good reason for the platforms to be monitoring the space, and perhaps to be doing more than they've done up to this time, but I thought this was a particular statement that showed the pressure that I think is going to keep increasing. If we think about the last week, with some of the platforms up on Capitol Hill, I think that's only the beginning. And here it's the Times saying, surely its programmers, speaking of Facebook, can train the software to spot bogus stories and outwit the people producing this garbage. You know, just some sort of magic technology dust and the problem ends. But it's obviously not that easy, because there are problems of over-regulation, and remember, around the world, countries are criminalizing the dissemination of false news. So this kind of attitude, which at some level is real (it's important that companies deal with this problem), is also going to feed into, I fear, the repression, and the repression often hits journalists and others who are simply sharing
information. I think we don't yet know the extent to which the policies, the community standards, the terms of service of the companies are actually having an impact on freedom of expression. At a minimum, the companies need to be transparent about how their policies operate, so that we can understand how much censorship is actually out there, how much regulation is taking place on platforms. Now, whether human rights law as a matter of law actually applies to the restrictions that the companies impose on their own platforms is one question. I think the clear answer is that human rights law doesn't, on its own, apply to the platforms. But whether the platforms should be applying standards rooted in protecting individuals' rights to freedom of expression is, I think, an important conversation for us to be having, and it's not a conversation that should happen only in the context of the crisis of the moment. A couple of other looming threats are out there. One is the right to be forgotten, and I think this points to something the platforms in particular should be very alive to. Congress is probably not going to do all that much (when has it done all that much?), so we can be afraid of some of the rhetoric coming out of Congress right now, but as for how much they'll actually do, count me as a doubter. Europe, on the other hand, basically has its act together, and it has its act together in a way that I think will undermine some basic rights, or at least has the potential to undermine freedom of expression. So one is the right to erasure, or right to be forgotten, which was adopted by the European Court of Justice just a few years ago. Let's put aside the debate over whether an individual should have that right to be forgotten; I think it's actually a pretty interesting debate, and there's no clear answer around it. Here's my concern. If you look in the center of this, when you report to Google that you want a link delisted, Google says, we will balance the privacy rights of the individual with the public's interest to know and the right to distribute information. This sounds a lot like what courts do, and yet, as with other areas of law in Europe, we're seeing European institutions essentially outsourcing the regulation of content to private actors. Is there, like, a Supreme Court reporter for all of the takedown requests to Google and all of its decisions? Is there some database that gives us a sense of precedent and how the companies will act in the future? And this isn't the only space on which European governments or the European Commission are focused, because Germany just adopted a law, which came into effect on October 1, I think, that essentially puts liability on the companies to remove content that's inconsistent with German law. So we're seeing, I think, a crack in the foundation of the regime of intermediary liability, the idea that the platforms shouldn't be liable for the content of third parties on their platforms. This is, I think, really the first major state law in Europe that will start to undermine the principles of intermediary liability and at the same time put on the companies the obligation to remove unlawful content. And of course, if you're facing the fines imposed under this law, the obvious incentive is to take down more content, to err on the side of removal and not on the side of the fine. There's a lot happening in Europe, and we should in some ways be more focused on the trends in Europe even than on the trends in the United States. OK. So at this point it's very easy to imagine, like the Rembrandt painting from before, that this is a real mess. It is a real mess. Censorship is rampant, and this talk is the tip of the iceberg, in a way. There's so much more happening, and there are so many individuals around the world being criminalized for their posts on Facebook or
Twitter, or VKontakte if you're in Russian-speaking space, or WeChat somewhere else. There's just a whole lot of expression around the world being criminalized. So it's very easy to throw up our hands, particularly in an environment where, frankly, we can't rely on the United States as much to be a champion of freedom of expression around the world. We have to think about what can be done, so I'll just offer a few ways forward and then close. (That was supposed to come down a little less all at once.) One is for the private companies, for technology: thinking first and foremost about the overarching question of the legal environment, what are the rules that should apply? I think the companies would do well to think about rules that protect individual users' rights. They speak that language, and even some of the content regulation or moderation that the private platforms are developing and imposing also, to a certain extent, protects individuals' rights: it protects the space. The more harassment and abuse there is on the platforms, the more vulnerable people will actually be encouraged not to participate in those platforms. So it's not just a simple question of ensuring they're not taking down any content; it's finding the line at which they can actually apply human rights law standards, even if they don't do so as a matter of obligation, where they can justify taking down content, or justify their policies, on grounds that really fit into Article 19. As it stands now, much of the content regulation we see is kind of bilaterally influenced, meaning if Turkey is putting pressure on you, you'll take down content; in another country where you don't get that pressure, maybe not. So the rules look pretty fluid from state to state, and I think having a general set of rules that applies across the board would serve individual users but also serve the companies as a form of protection. I think there needs to be a lot of thinking about user autonomy and
control. I think companies are moving toward that and speaking about it, but it can't just be autonomy and control; there also has to be education about those tools, and they need to be user-friendly, because right now they're sometimes just hard to access. Transparency is kind of a mantra; at a certain level it sounds meaningless, but what it really means is that the companies need to be clear about the standards for taking down content, and even more than that, clear about examples of content takedowns. We need to know more, because they're essentially not just making rules but adjudicating, and we need to know more about what that adjudication looks like. And then finally, I think the industry as a whole (and I know it's hard to think about the industry as a whole, because the companies don't always think of themselves as being in the same industry), to the extent you're a company that is regulating content, should start thinking about self-regulation as an industry, not just self-regulation company by company. Start finding ways, almost as we think about press councils around the world, which are a form of press regulation that doesn't involve governments in many parts of the democratic world and in the developing world. We should start thinking about ways in which the companies can regulate themselves, and do so in a way that's open and transparent and avoids government regulation, because government regulation almost always results in over-regulation. If we're thinking about states and the regulatory environment, one key thing is for us to continue to advocate against intermediary liability. I think it's pretty clear that the fact that companies, whether we're talking about startups or the big behemoths of today, weren't and aren't held liable for content on their platforms has been part of the engine for innovation and also part of the engine for activism around the world. And too
often, when they're put in a position of liability, it falls hardest on those who don't have other platforms to share information, to protest, to criticize, and so forth. I think states, governments, need to create, maintain, and promote a legal environment for freedom of expression. That means all of these laws, applicable in offline space but also online, that redefine journalism as terrorism, that criminalize information, and so forth, need to be repealed. And we could go further and talk about things like anti-blasphemy laws; there's a very long list of ways in which the space and legal framework for freedom of expression needs to be improved, state by state. I think there needs to be a recommitment to basic rule-of-law principles. It's basically inappropriate for governments to be saying to the companies, you figure out what's on the lawful side and what's on the unlawful side. This is certainly something for courts to continue to have a role in, and the more governments outsource it to companies, the less we'll know about what kind of expression is legitimate and what's illegitimate. And then finally, states could think about ways, and some of this is actually in the German law I mentioned before, the NetzDG law, to push companies toward more transparency, toward public reporting. If we think of something almost like SEC-style reporting requirements, there could be more information that companies are required to make transparent, but without the hammer of liability. How that would actually operate is something we could be thinking about. But these are ideas we're working through, and, as I said, maybe with the Rembrandt at the beginning, hopefully the message there is that I don't have the answers. The situation is grim out there, and we need to recommit to the basic principles of human rights law, which is to protect everyone's right to freedom of expression and freedom of opinion, and to ensure that
that protection isn't just what governments provide. I'll stop there, totally happy to take any questions, criticisms, you name it. Thank you.

So we have three mics in the room: one here, one over there, and then this sweet orange cube here that you can throw.

All right, thanks for coming. What role do you see for ISPs, one level down from the actual content platforms, where there are tensions between, on the one hand, being a sort of dumb pipe that's just supposed to be doing throughput, and on the other hand potentially having a role in upholding freedom of expression, given that the way you turn off the internet is by leaning on ISPs, not on platforms?

That's a really great question. Actually, my last report to the Human Rights Council focused on what we call the digital access industry, which is basically infrastructure. Around the world, in a way, it's a lot harder for telcos and ISPs to push back, whether in the context of filtering or blocking or entirely taking down content. They're in a hard position because they operate under licenses in a way that most of the platforms don't. It's really difficult. I think there are a few things that ISPs (and I rope in telcos, and, footnote, the clear distinction between content providers and telcos and ISPs is blurring a lot now, because the industries are getting into each other's space pretty regularly) could do. One thing is real, clear transparency to their users about when they share information, when they share user information with the state, so that individuals can actually know when they go online, particularly if we're thinking about surveillance, that one of my risks is that this ISP could share my information with the state. I also think there should be more pushback by ISPs and telcos when they face demands for shutdowns or filtering. That's really hard; it's hard for them to say no, again because of licensing and domestic regulatory rules. But, you know, there's a
lot of foot-dragging and questions that they could be asking of governments when there are demands for user data and when there are demands for filtering and so forth. So they clearly have a role to play in this space. I think they're in a harder position, but there are some steps; those are just two, and I think they should be doing more than they're doing now.

What do you think about the, for lack of a better term, conflict-of-laws questions? You talk about getting the courts into the system more, but that also raises the question of which courts. If the ISP is going to accept a court order from the United States, do they accept one from France? One from China, Russia, Uzbekistan, Azerbaijan? And which ones, given that those laws can not only be very different but sometimes mutually exclusive? My favorite example is the map between Pakistan and India, where they have completely different laws that require you to do opposite things, which still leaves the content provider almost having to make a moral judgment about which laws to follow. And is the moral position of accepting the United States or accepting France different from not accepting this particular law in India? How do we deal with that, given your framework of getting courts more involved?

It's a really good question; I wish we didn't have time to answer it. No, two things. One is that I think the role of the courts in particular is important. Maybe to step back a little bit: it's important for the companies, I think, to respond in principle, when it comes to takedown requests, only to court orders or other legitimate administrative orders of the state. There needs to be some process so that the companies are not in the position of getting a call from a security official who says, we want you to take down this content. I think I'm just
more comfortable, and I think the companies are more comfortable, that's my guess, if it comes through judicial channels. So that's at the level of principle. The second layer down is what those rules should be based on, and I think, again, this heads into maybe a little bit of an idealized perspective, but those court judgments and orders should be based on human rights law. That should be the standard across the board. And I don't think the companies are in much of a position to say no if they get a legitimate court order from a state to take down content; if it goes through regular process, it's simply very hard to say no. Now, that leads to the fundamental problem that you mention, and it comes up in a bunch of different ways. It could come up in law enforcement contexts, and there's actually an effort, which you might be familiar with, the Internet & Jurisdiction project, which is seeking to answer, and maybe even create a kind of structure for, law enforcement requests in particular, to make sure that law enforcement requests for the takedown of content are consistent around the world, so there isn't this kind of position where, if I do this in compliance with your law, I'm in non-compliance with this other law. So there is some thinking around that. Basically, I think that if the companies have a legitimate court order, and this isn't going to be a very satisfactory answer, I don't think I have another one, but if they have a legitimate court order to take down content, I don't think they have much of a choice but to do it, at least in the jurisdiction where the court order comes from. So it's a geo-located kind of takedown. Now, where we're headed, I think, is the real hard case, which is probably going to be decided next year in Europe, where France, in the right-to-be-forgotten context, is saying we want you to take down content not
only in France but globally. They want global delisting, and the natural result of that is going to be conflicts of laws around the world. So I actually don't have a really good answer to that conflicts question. It's one of the reasons why I think global delisting is really problematic. It's one thing to talk about delisting according to a particular jurisdiction, but for any government to say this should apply globally is going to lead exactly to those problems, in addition to other problems of freedom of expression. So I know I'm not really getting to the core of the question, but that's essentially where my thinking is right now.

There was a hand up here. An easier question. So I was curious: I've heard at several conferences recently people talking about Silicon Valley companies being treated like common carriers, like a utility, like ConEd or something like that, and potentially being regulated as such. So this is actually a two-part question. It's not an international law question, but I am curious about your input on that subject. And then, subsequently, it seems from what you've been saying that you're more comfortable, to a certain extent, with government regulating speech than with private companies doing it, and I'm curious whether you're more concerned today with tech companies than with government regulation when it comes to online speech.

So I'll answer the second question first. I wouldn't say that I'm more comfortable with state regulation; as I said, I think almost all state regulation results in over-regulation. What I would like to see is more judicial process, real judicial process, around the takedown of content, at least where there are complaints about illegitimate takedowns. There are of course going to be rules around some of the most manifestly unlawful content that
companies should take down, if we're talking about child exploitation for example, but I do think courts, rather than companies, should be in the position of imposing the harshest kinds of penalties; otherwise we lose the sense of a public sphere regulated by rule of law. So that's where I come to it. It's not that governments are better, I don't think they are, but I do think the courts need to be involved. On the question of common carriers, of treating some internet companies as utilities: I haven't gone that far in thinking through all the ramifications, but I would put it into two different contexts where freedom of expression is really relevant to that question. One is network neutrality and zero rating, where I think the companies, particularly telcos, really are very much like utilities, and I'm really concerned about the FCC's proposed rule changes around network neutrality, because that will lead, I think, to the prioritization of media and the prioritization of content that can pay for it. So in that sense, certainly, telcos and ISPs have a public-utility kind of role in this context and should be treated that way.

Thanks. Oh, in my hand, I'm pointing, so yes, Eileen. Thanks, Sasan. So I'm going to go back to something you said: you really want to root things in human rights law. When you started with Article 19, you went through paragraph one, paragraph two, and you said you like to skip over paragraph three to the extent you can. My background is as a media lawyer, and I think there really is a clash of values with some of the privacy stuff, certainly in Europe. This is probably the one place where I buy into American exceptionalism, on the freedom of expression side, coming from a media law background. That being said, if you look at Article 19, do you find in human
rights law a basis for the right to erasure, because of that third paragraph? How do you weigh that?

Well, under Article 19(3), one of the grounds for restricting expression is where it's necessary to protect the rights or reputations of others. So that's where, if I were lawyering for the right to be forgotten as a matter of Article 19, that's where you'd find it. You don't have to go to Article 17, which provides a right to privacy, to get there. It's not as if there's a necessary or built-in clash within Article 19 between expression and privacy; it allows for that kind of restriction. I have two separate concerns. One is on the necessity front. Remember, the right to be forgotten isn't about the media company itself taking down content; it's a delisting, so if you're a data processor, as Google was found to be, you need to take down the listing. I think one of the problems is necessity: there are other ways for individuals to protect their reputation, or other rights, rights to privacy, even in the context of continued listing of a particular item, and we can talk about what those might be. Some of the ideas we're seeing in the disinformation and fake news context are portable to right-to-be-forgotten kinds of issues as well. I think the other problem is the proportionality of delisting content. In many of the reported cases where individuals are seeking delisting, it seems disproportionate to the impact that the continued listing actually has on their rights or reputations. So I think you can argue within the context of Article 19 for a right to be forgotten, but I also think there's a clear case to demand that restrictions, and I would call this a restriction on expression, meet these necessity and proportionality standards
that they don't necessarily meet. So that's how I would get to it. A broader point on the right to be forgotten: in individual cases it may not look so terrible. The problem is, in the aggregate, the delisting of quite a bit of information that might be useful for a historical picture of a particular place, or for researchers to be able to study trends in particular areas. So I think it's really important to step back, and when you think of the tens of thousands of requests for delisting, which tend to focus on the continuing relevance of information, I think we're losing that broader picture, even if you could make the case for an individual's right in a particular case.

Hi, I'm Aviv Ovadya. I'm a project director at the Center for Social Media Responsibility, which just formed out of Ann Arbor. Among other things, I wanted to ask you about things that are not quite takedowns or delistings: things like the autoplays that show up after you watch a YouTube video, or the ranking in a news feed or in search. So it's not a delisting, and I apologize if you already covered this to some extent, because I was actually talking to someone at Twitter about these issues and they're meeting tonight. What did they say?
We're talking about misinformation and ways to address it that don't lead to censorship and do protect free expression, but there is sort of a line around these things. So I guess that's one set of questions: how do you deal with these things that aren't black and white? And the other thing is the other side of this right to be forgotten: do Russian Twitter disinformation trolls have a right to be forgotten? That is actually a major problem in this space: they don't exist on the internet anymore. There might be some people who have cached the data, but they can't even say that they have it, because if they do, they've violated Twitter's terms of service, and then they get booted and no longer get any Twitter data; they can't do any more research in this space. So I guess those are the two sides of that coin.

I'm not sure I understand the second one quite as much as the first. So the second one is that the right to be forgotten means you have to take down that content if there's a request to do so? That's my understanding of it, to some extent. That's one version of it. Basically, it's a delisting, a delinking, so it's not an expunging from the internet permanently, or from all private databases. For all intents and purposes, what are you going to do, go to El País and look at the microfiche? People probably don't even know what microfiche is anymore.

I guess the reason I brought it up is that the argument Twitter might make, and this is just their public policy statement, is that if a tweet is deleted, then any person who has that tweet stored locally from their feed must delete that tweet. And so that means that if I am a Twitter troll and I spread some disinformation campaign and then delete my account two weeks later, that is expunged from all traces, online, period, and offline in theory. Would it stay in the Wayback Machine, for example? The
whole point of this is that they don't allow it; their policy is actually to remove it completely, and that's to protect these exact values. So if you wanted to delete something from Twitter, then it's gone.

So let me get at the first one more than the second one, maybe. I actually think there are a lot of things that companies can be doing in this space, if we're talking about misinformation on the first side, that fall short of a really heavy-handed censorship of information. And I'll take at face value the statements most of the companies have been making that they see this as something to solve. There have been some interesting ideas out there: providing people with more information about URLs, about sources, information around flagging. I think there are some interesting things out there that will, hopefully, at least not give too many excuses to governments to start actually criminalizing in this space. Those ideas are out there and a lot of people are talking about them, so I won't go through them here. Maybe to get at your second question in a slightly roundabout way: I think one of the problems with the right to be forgotten is that it's kind of infecting a lot of thinking about different contexts of our own personal information, especially information about us that is truthful and is online, beyond the context in which it arose in Google Spain, the 2014 case before the European Court of Justice. So I think it's becoming more and more common to think that, well, if this information is just irrelevant, or it doesn't reflect me anymore for whatever reason, there's a right to take it down, that I own information about myself, as if every person were an A-list celebrity who gets to litigate their image. I just
don't think that's plausible. So when we're talking about the kind of situation you're describing, it's a problem for Twitter, but I don't think they should go down the road of taking information down once it's up. I think that's a problem. But I'm not sure I'm totally getting the direction you're heading with your question; we could talk afterwards. Sure, yeah.

Hadar? You've got the box. The cube, sorry. I'm wondering about the intersection of the Ruggie Principles on business and human rights, the UN Guiding Principles on Business and Human Rights, and the very slow, drip-drip emergence of some kind of format for a treaty, and how that may or may not be intersecting with self-regulation and all of the things you were talking about.

So for people who aren't familiar with them: the Ruggie Principles are the UN's Guiding Principles on Business and Human Rights, and in fact the work that we've been doing over the last couple of years, thinking about the private sector in the digital age, builds on the foundation of the Ruggie Principles, which on the one hand assume that human rights law doesn't apply directly to companies, but also, I think, make a pretty good case that in particular sectors of our economies, companies stand in between governments and individuals and have the potential, and even the responsibility, to respect the rights that those people enjoy. And so a lot of our work, again, goes back to the way Article 19 is framed, which is that everyone enjoys these rights. So in thinking about these rights, it's important to ask: are companies doing things that interfere with those rights? Are they doing things that make it harder for individuals to enjoy those rights, whether in the context of government-imposed restrictions or in the context of their own terms of service and community standards? I think that's one way to think of it. What those don't answer
is whether the platforms have space that we should think of as public space. I think that's a fundamental question, and there's some variation from jurisdiction to jurisdiction: in some places there actually is competition among different places where you can express yourself; in others, like Myanmar, it's Facebook or nothing, so that's the place to express yourself if you're in digital space. So I think that's a kind of fundamental question as well. And yeah, I would say that the companies can draw upon both the Ruggie Principles and the way the Ruggie Principles bring in things like Article 19, and they can construct rules around those.

So maybe one more question. Yeah, I want to ask a question that gets at what we talked about with hate speech and, I guess, inciting people to violence. The internet is incredibly powerful due to anonymity, but the paradox of that is a lack of accountability for those words: you may say something, and it will mobilize people to go beyond what you originally intended, and maybe someone could get hurt. I just want to know your opinion on accountability.

Yeah, you've hit on a real, and I think hard, problem for advocates of anonymity, and I'm an advocate for anonymity online. First off, there are things companies can be doing to provide more tools for users who are the subject of hate, when it's a targeted kind of hate. Some of that is blocking; some of it is the more extensive group blocking that some of the companies are doing, but it's not that easy to do all the time. So I think there are steps that can be taken, but I also think that human rights law actually provides a way of thinking through these things, and companies can benefit from using human rights law to think through how to deal with
problems of anonymity, anonymous hatred, anonymous incitement to violence, or anonymous abuse on their platforms. And the way they can think that through is that Article 19 allows restriction of that kind of expression in that context if it's provided in their rules, let's say, and it's necessary and proportionate. If they make that kind of analysis, I think that's fair for them to do, and they should be doing it. Now, one of the problems with the proliferation of anonymous accounts on some platforms is that it becomes a bit like whack-a-mole: it becomes very hard to hold any particular account holder, or account, accountable for that kind of abuse, and it's hard to stop because of that proliferation. I think there are some tools in technology to track that proliferation, which we see used in the counterterrorism space and which probably can be applied more in this kind of context as well. But I think there are a lot of tools companies can use, again, that are rooted in law, even if they're not saying they're bound by law to do X, Y, or Z; because they're rooted in law, they have a kind of semblance of legitimacy in human rights law. I think there are things they can do. But I'm saying that, and it sounds very easy, because I'm up here and I'm not actually working through the technology or designing it. I don't think it's easy. On a lot of these issues, I don't want to give the impression that I think you can just snap your fingers and get rid of hate, or find accountability. These aren't easy questions; in a way, that's why Rembrandt's old man has his hand on his head. These are really hard questions.

I think there was an incident earlier in the year where, I think it was the editor of an alt-right online news outlet, his grandmother or his mother had a dispute with a Jewish person over a piece of property, and he used his alt-right
platform to mobilize a lot of hateful online trolls, and the result was a lot of malicious personal phone calls to that Jewish woman's cell phone. It's a terrifying weapon; it's a powerful terror weapon, is what we are continuing to learn.

With that, I don't think we should imagine that digital space is jurisdiction-free. So in a case like that, and I don't know that particular case, but it sounds like it's getting pretty close to incitement to violence, governments have a responsibility to prevent violence against their citizens or people in their jurisdiction. You're just giving one example; we're seeing this in Myanmar right now, where the government is doing almost nothing to prevent the use of digital space to incite violence against Rohingya Muslims. The government should be taking action in some of these contexts. It's not that, because it's Twitter or Facebook or another platform, or just another media outlet like the Daily Stormer, people are free to do what they want simply because it's digital space. If people can be identified, and many times they can be, for that kind of hatred that constitutes incitement, it seems to me that governments should be doing their part in holding people accountable, and companies should be cooperating in those cases, as long as regular legal process is followed.

So I think that's, yeah, we're good. You can get a beer now. You can get a beer now! Please join me in giving a round of applause. Go get another beer, grab some more snacks, and feel free to hang out. See you later.