Good afternoon and a very warm welcome to our IIEA webinar on censorship and free speech in the digital age. My name is Joyce O'Connor and I chair the digital group here at the IIEA. It is my great pleasure to welcome our distinguished guest, Jillian York, author and activist, who is the Director for International Freedom of Expression at the Electronic Frontier Foundation and is based in Berlin. Jillian, you're very welcome. We're delighted to have you join us today, and thank you so much for taking time out of your busy schedule to be with us. We appreciate it very much and look forward to your presentation. Jillian will speak to us for around 20 to 25 minutes, and then I'll go to our audience for questions and answers using the Q&A function at the bottom of your screen. I look forward to receiving your questions and would very much appreciate it if you could give your name and affiliation when you send them in. We would encourage you to join the conversation on Twitter; our handle is @IIEA. As is usual, our presentation today and the Q&A are on the record. Today's presentation is very timely. The current debate around hate speech, and indeed disinformation, has been well documented in the media, and here in Ireland the proposed hate speech bill is currently going through the legislative process. Today, as it happens, the first report on the emerging disinformation landscape in Ireland, by the European Digital Media Observatory hub based in DCU, is published. So you can see, Jillian, there's a lot of interest in what you're going to talk about. Jillian will outline how the internet transformed the ability of all citizens to contribute to public discourse, especially those living under authoritarian regimes. The internet, of course, has also facilitated the spread of harmful content such as disinformation and hate speech. Jillian will take us on a journey from Silicon Valley to Tunisia to Cairo to Berlin, tracing the change from the early belief, shared by citizens and the big tech companies alike, in maximizing free speech, to the policies of surveillance and control exercised by the platforms and most governments. The issue of free speech is a very nuanced one, and as Jillian York sees it, the central conundrum of social media is how to address hate speech. Based on her extensive research for her book Silicon Values: The Future of Free Speech Under Surveillance Capitalism, as well as her direct experience working with global rights activists who have experience of the internet and social media, she will give us a really good perspective. The internet, as I said, is both a critical organising tool for democracy and freedom and, as some see it, a threatening environment; some would indeed say a vile one. She will give us her unique perspective and discuss how companies, platforms, and democratic governments, as well as authoritarian regimes, are responding with increased censorship and moderation of the internet. Jillian's presentation highlights her insight into what is at stake when private companies make speech and censorship decisions that, at times, have for some been a matter of life or death. She argues that more protection of citizens is needed against the harnessing of our personal data. One commentator indeed has said, and I'm sure you've heard this before, Jillian, that your insights make us all think twice about signing those terms and conditions when we use these sites.
She will also assess the negative consequences that can result and explain how citizens can respond. As I said, Jillian is the Director for International Freedom of Expression at the Electronic Frontier Foundation, a fellow at the Centre for Internet and Human Rights at the European University Viadrina, and a visiting professor at the College of Europe in Natolin, Warsaw. Her work examines state and corporate censorship and its impact on culture and human rights, with a focus on historically marginalized communities. She also works on European policy and on the impact of sanctions on the use of technology. Currently she is working on three particular topics: social media and conflict, adult nudity, and the decentralization of social media. Jillian, we look forward to your presentation, and thank you again for being with us.

Thank you so much, and thank you for that kind introduction. It's really sort of a perfect lead-in to what I'm about to speak about. So, I'll start from the very beginning, as I think that's a great place to start, and I'd like to share briefly how I got into this space and how my views have been shaped over time, while bringing in the history that we're about to speak about. About 15 years ago, I was living in Morocco, trying to begin a career as a writer, and got involved with some activism, some advocacy campaigns, through a group called Global Voices. That's really how I first learned about the ways in which the internet was censored. I briefly experienced some internet censorship myself when I couldn't access the blogging platform I was trying to use at the time. I then discovered that a wide variety of speech was being censored in Morocco by the government, mostly touching on religion, criticism of the country's human rights record, and criticism of the royal family, as well as a handful of other things. So that was really my entry into this, and it led me to the work I ended up doing at the Berkman Klein Center for Internet and Society at Harvard, where I was hired to work on a project called the OpenNet Initiative. It doesn't exist anymore, but the website is still out there, as are the three books that the project put out over the years. The goal of that project was to create a survey of more than 65 countries and how they were controlling the internet. So I was focused on what they called filtering, or what we more commonly know as censorship, and eventually also looked at things like surveillance and the cultural contexts in which this existed in those countries; that's what those three books are about. I'm happy to share that link if anyone is interested. I was managing this project, managing the research, and the way that it worked is that we actually had people testing in those countries, technically running tests based on a list created by academics to discover which websites were blocked. I'm obviously not going to get into detail here about the different countries, but there was a through line in what we could see. In Europe, of course, you had the blocking of child sexual abuse imagery, which I think most of us would deem an acceptable use of this tool. But across the world there was a spectrum between, you know, semi-democratic countries and authoritarian countries.
What these countries typically went after were things like religions that ran counter to the main religion of the country, criticism of the country's human rights record, sexual content and nudity, things like that, and at times social media. Now, I think I started working there in 2007, and this was when social media, or at least Web 2.0, was a brand new concept. What we saw then was that some countries, the ones I can recall off the top of my head being Turkey and Thailand, were already blocking sites like YouTube, or at least trying to block individual videos. In the two cases I clearly remember: in Thailand, one case involved a video that was critical of the king and specifically accused him of something possibly false; in Turkey, it was jokes about Atatürk that were inappropriate and illegal under Turkish law. Both of those governments went after YouTube and blocked it. And as a result, that really changed the way that we researchers, and I think the way that governments, looked at what was possible with respect to social media. Again, this was when it was brand new. Because these platforms didn't want to be blocked in these countries, they actually changed the technical dynamic to enable governments to request that certain videos be removed or blocked, rather than having governments block the site entirely. That was one major shift that happened around that time, and it has really shaped the way that governments can interact with these companies: they can request that information be removed, or request information about users, and these companies have entire departments to deal with that.

So that was my first awareness around this. The second thing came for me in about 2010, while I was still in that position. A young man, a blogger I knew from Morocco, contacted me and said he was being censored by Facebook. Now, I wasn't really sure what he meant, and thought: how could a company censor someone? It's a company, not a government. But what we discovered was that his page, which was calling for the separation of religion and education, had indeed been removed by the platform. So I wrote about it, and what happened after writing about it was that I ended up starting a relationship with Facebook that continues to this day, where we were able to raise these issues with them. I started to see things a little differently: although social media platforms were companies, in some cases private, in some cases public companies with shareholders, they nevertheless had an outsized impact on free expression. And once you start looking at these things, you start seeing them everywhere; I started researching and reading and found that there were a lot of complaints along these lines. Again, this was 2010, just before the Arab uprisings, which are the next element of this talk. These companies were pretty small. They had really small staffs. When people reached out to me from Facebook, they used their real names; I was able to find the individuals on LinkedIn or find news articles about them. And all of the content moderation, and I'm not even sure we really knew that term at the time, was done by humans, with seemingly little consideration for how it might or might not scale.
At the time, most of it was also done in Silicon Valley, so it was a very narrow field, and yet these people had an impact on the entire world. In the research for my book, I found that these teams were in fact mostly American and largely white. They were diverse in terms of gender, surprisingly, but nevertheless these people were making the rules for what the entire world could say.

And so that brings me to the Arab uprisings. A couple of years prior to the beginning of the Tunisian uprising, we saw the role that these platforms could potentially play in people fighting back against their governments. We saw it with Belarus and with Iran in 2009. But those were a little different, much smaller in scale, and a lot of the media criticism about how social media played a role was that it was largely people outside the country, in the diasporas, who were engaging on social media and reporting what was happening. That doesn't make it any less impactful, but it was on a much smaller scale, let's say. By 2010, a lot of people in Tunisia and Egypt had phones in their pockets. Twitter had enabled SMS-to-tweet, so you could just send an SMS; you didn't have to have a data connection. And a lot of people had home internet; I can't remember the statistics, but the numbers were pretty high. But when the Tunisian uprising started, these companies really had no idea the central role they would play. People in Tunisia had been blogging for a long time; in fact, I think one of the contributing factors is that it was one of the countries with the earliest blog networks, going all the way back to the late 90s or early 2000s. But still, there was no real anticipation of what would happen. And so when the Tunisian uprising spread to Egypt and then to other countries, from my perspective, because I was in touch with them at the time, the social media platforms were really taken aback by their own scale. They were surprised by how important a role they played. People took to the streets on January 25, but there is one little backstory I'd like to share here, because I think it really illustrates the issue. In November 2010, over the American Thanksgiving weekend, I got CC'd on an email thread with some other advocates in the US saying that a really important page in Egypt had been taken down over that holiday weekend. That page was actually the one that would eventually call for people to take to the streets on January 25. It was called We Are All Khaled Said, named after the young man who was killed by police in Alexandria that summer. It was Egyptians who were running the page, and the reason Facebook had removed it was that they'd broken a rule: they were not using their real names, for probably obvious safety reasons. So we advocates scrambled to get Facebook on the line over a holiday weekend, and we were actually able to reach some of the highest executives in the company, who restored the page under certain conditions. Those conditions, which are shared in my book, were basically that someone else step forward using their real name to administer the page. That page fortunately went back up, and fortunately enabled people to call for this uprising, which, you know, was many years in the making of course, but may not have been coordinated so well without Facebook and without Twitter.
And the Egyptian government responded: as people went into the streets on the 25th, they blocked Facebook that day. The next day they blocked Twitter. You probably all know this story, but the day after that they blocked the entire internet, with the exception of one ISP. That one ISP enabled people to continue; I remember stories of people gathering in one apartment near Tahrir Square to use the internet there. There were mesh networks on the ground. There were people with SMS, of course, who could still call out of the country to tell stories and have their friends outside the country report them to media, and so on. So not all hope was lost, and the platforms still played an important role. They even responded to the internet shutdown: I think Twitter worked with Google to put in place something called Speak2Tweet, which allowed people to call up and leave voice messages that would then become tweets. So the companies were actually very responsive to what was going on on the ground. And as I'm sure you remember, the narrative around all of this, from media to academia to governments, even the Obama administration, for instance, was that these platforms had in fact enabled revolution. In some ways that was true. It's hard to say how well these messages would have spread without this style of mass media. Of course we've always had protest, but this was in authoritarian countries, where gatherings are monitored and protests are banned or restricted. And so, early on, before these governments became responsive to it, the platforms enabled people to at least figure out who each other were. They allowed information cascades to happen. That's how I see what happened there. And so there was all of this hope after that, but in the next few years things very quickly turned, and I think that's where we're getting to next.

While the world was seeing these platforms as a means to conduct political advocacy or even topple dictators, authoritarian countries and democratic countries alike were taking notes, figuring out how they needed to engage with platforms and where they needed to step in to control them. In those ensuing years we did see a lot of these platforms blocked. We saw a lot more demands on them to remove certain content. We saw SSL encryption, HTTPS as you know it, become more common and more popularized, and we saw the ability of governments to block individual videos or individual pieces of content wane, because, and I'm not a technologist so I don't want to get this completely wrong, it's very difficult to block an individual piece of content over an SSL connection (there's a small sketch of this point below). And so the demands on the companies, the back channels, grew. Then, by 2014 at least, you had companies publishing transparency reports, sharing for the most part what governments were demanding and how many pieces of content were being taken down. To a large degree this gave users, the public, and the media a better sense of what was happening, but it wasn't all good, and we're still struggling with that; I'll come back to it when I talk about the Digital Services Act. So I mentioned 2014 as the point where transparency was becoming popularized, but I see 2014 as a turning point for something else as well. Over the prior years, I remember going to lots of conferences, and there was a lot of talk about how these platforms would help.
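To make the HTTPS point above concrete, here is a minimal sketch in Python of what an on-path censor can and cannot observe. The video path is a made-up placeholder, and this illustrates classic TLS behaviour in general, not any specific censorship system or EFF tool:

```python
# Illustrative sketch: what an on-path observer sees during an HTTPS fetch.
import socket, ssl

host = "www.youtube.com"           # visible to a censor via DNS and the cleartext TLS SNI field
path = "/watch?v=placeholder_id"   # made-up path; travels only inside the encrypted tunnel

ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as tcp:
    # The hostname below is sent unencrypted (SNI) so the server can pick a
    # certificate; it is the one thing a network-level censor can still match on.
    with ctx.wrap_socket(tcp, server_hostname=host) as tls:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls.sendall(request.encode())   # full URL, headers, and body are encrypted
        print(tls.recv(200))            # the response is encrypted on the wire too

# Upshot: a censor can block all of youtube.com (by hostname or IP), but it
# cannot see which video was requested, so per-video blocking fails. Hence the
# shift to asking the companies themselves to remove individual items.
```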
And then, around 2014, three things happened, as I see it. The first was a lot more pushback around harassment on these platforms, and a lot more demand for platforms to do something about the growing harassment that was occurring. From where I sat, the pushback seemed to come from the US, but in fact these were global conversations, and there was a lot happening behind the scenes: organizing to try to figure out how to make these platforms safer, particularly for women and other marginalized communities. The second was the rise of the Islamic State. 2014 was the year that James Foley, the journalist from my home state of New Hampshire, was beheaded by the Islamic State, and that image spread across all of these platforms and sort of shocked them into doing something about it. In that particular case, and I'm going to fail to remember the detail here, his family was able to advocate on the grounds of privacy that the videos and photos should be taken down. But curiously, the platforms actually allowed the still images to stay up when they were published by official news media, again making the decision about who was official and who wasn't, and doing that for the entire world. So we started to see this stratification happen over who could publish on these platforms when something was controversial. And the third thing, which is probably obvious, is the rise of populism, and specifically the rise of the far right, the alt-right in the US as it used to be called, and Trump, of course. I think those three things, all happening at once, pushed democratic governments to be more responsive to these threats and to put more pressure on the companies.

So around 2014 we started to see the Obama administration talking about these companies needing to do something about the rise of terrorism and then far-right extremism. We saw more conversations happening in Europe around hate speech and the threat of populism. And the conversations around harassment actually did push companies into action. Twitter created the Trust and Safety Council, which was recently disbanded under the new Musk administration, let's say, at Twitter, but which for many years brought together a wide range of organizations from all over the world to have more of a say in Twitter's policies and to consult with the company. It wasn't all bad: although we were seeing more misinformation and disinformation, more hate speech, more harassment, and more terrorism on these platforms, we were also seeing the platforms engage more with civil society. At conferences I attended, they were there to listen to people's concerns. Some of them created user groups like Twitter's Trust and Safety Council, or consultative arrangements with different civil society groups, many of which were under NDAs, non-disclosure agreements, but which nevertheless existed and allowed groups from all over the world to have an actual say or consultative role in policy and in how content moderation worked. At the same time, as these platforms were growing, content moderation was becoming increasingly complex for them, because when you only have a million users, say, and that's still a pretty large number, it's not that hard to manage.
Until recently, content moderation on the platforms was always reactive: a user would report something they saw that they believed violated the rules; that report would then go into a back-end queue, where a content worker, either in Silicon Valley or at a third-party company somewhere else in the world, would have to make a binary decision over whether that piece of content should stay up or come down. But as the platforms scaled, content moderation had to become more complex. On the one hand, they had to scale the number of human moderators, and so these companies, wanting to save money, started hiring third-party firms in countries like the Philippines where they could access cheaper labor; it would be a few more years before it came to light just how problematic some of this was. They also had to institute algorithms, automated technologies that would identify certain content and either remove it or at least flag it for the human moderators. And then there was also a diversification of the actual decision at the end of the process: whereas it used to be a binary leave-up-or-take-down decision, companies, and I believe YouTube pioneered a lot of this, created other ways of moderating content, whether demonetization or what we now call shadow banning, which we now know to exist although companies denied it for many years; shadow banning being, essentially, hiding things from search or from your feed while still allowing them to exist. (There's a small sketch of this pipeline below.) And so this became much more complex and much more opaque for the rest of the world.

I think that's when a lot of the conversations in Europe really started in earnest, because although there was obviously a lot of awareness around the world of the issues with platforms moderating or censoring speech, this is around the time when the conversation shifted to the problematic speech that platforms were hosting and how they weren't responding to it in the ways that they needed to. And so that's when Europe, I think, really started to play a bigger role, and what we've seen in the past few years is a range of different methods of trying to regulate speech. I've lived in Europe now for about eight years, and I believe there is a really strong awareness among civil society here, and among some governments at least, that this is a two-sided problem: companies do have an outsized role and do have the ability to censor even politicians, as we've seen in Germany, but there is also a wide range of problematic speech that is enabled by the platforms and has obviously created huge problems for democracies. One of the first attempts, at least from my perspective, was the Network Enforcement Act in Germany, which holds companies liable for the illegal speech, mostly hate speech, that they host. This is one method that's become really popular throughout the world, and in some cases, in a democracy like Germany, it can be fairly effective. It was criticized widely, including by me, when it first came out, but it hasn't created the problems that we expected it to. In fact, I would say that it's been mildly useful.
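As an aside, here is a minimal sketch of the reactive pipeline and the widened menu of outcomes just described. Every name and type is invented for illustration; this is not any platform's real moderation API:

```python
# Hedged sketch of the workflow described above: a reactive report queue,
# and the later shift from a binary verdict to a wider menu of actions.
from collections import deque
from enum import Enum

class Action(Enum):
    LEAVE_UP = "leave up"        # the original binary decision...
    TAKE_DOWN = "take down"      # ...was just these two outcomes
    DEMONETIZE = "demonetize"    # later additions, reportedly pioneered on YouTube
    DOWNRANK = "downrank"        # "shadow banning": hidden from search and feeds

report_queue: deque = deque()

def user_reports(post_id: str, reason: str) -> None:
    """Reactive moderation: nothing is reviewed until a user flags it."""
    report_queue.append({"post": post_id, "reason": reason})

def moderate_next(decide) -> Action:
    """A human moderator (or, later, an automated classifier) works the queue."""
    report = report_queue.popleft()
    return decide(report)

user_reports("post123", "harassment")
print(moderate_next(lambda report: Action.TAKE_DOWN))  # one report, one verdict
```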
To return to the Network Enforcement Act: the issue with it, of course, is that it still puts the onus on the companies to know and determine what is hate speech under a given law. There have also been issues with the reporting: when you use Twitter in Germany, for example, you can only report hate speech in the German language, which, in a country that has welcomed so many migrants over the past few years, has made it really difficult for many of those who are targets of hate speech. Another issue is that the penalties fall on the company rather than on the person engaging in hate speech. Prior to this law, the judiciary had to do a lot more work to go after individuals, but it was the individual who was penalized, and there's plenty of research showing that individuals are more likely to be deterred if they're actually held liable for engaging in hate speech. When the company is the target, it pays a fine and that's really it. I also see this law as problematic in the sense that it's been replicated in a number of countries that are less democratic than Germany and abused by governments to go after things that are not hate speech: things that may be deemed illegal there but are otherwise protected speech.

The Digital Services Act, though, is a really different approach, and I think there's a lot of good in it. There are some negatives as well, but I'll mostly focus on the good. For one, it creates a fast-track procedure for law enforcement to take on the role of trusted flaggers: they're basically given a fast track to report things that are illegal. In a democracy, again, this is a good thing. It also preserves the EU system of limited liability for online intermediaries, which means that, unlike under the Network Enforcement Act, platforms can't be held responsible for user content as long as they remove content they actually know to be illegal, that is, as long as they remove it when they're alerted to it. They're not responsible for proactively removing content, and EFF sees this as a good thing, as does most of the civil society we work with in Europe. One other thing the DSA does that I'm really happy about is its strong emphasis on greater transparency and user rights. It requires platforms to explain their content curation algorithms in more detail, in user-friendly language, and in all of the EU's languages, which is fantastic; I'd love to see that expanded to more languages, of course, but for now it's a great start. It also aims to ensure that users can better understand content decisions, which are often seen as arbitrary, and how they can pursue a path of recourse and reinstatement, that is, appeals. This is something we've been focused on, with a large group of civil society organizations from all over the world, for a few years now in the Santa Clara Principles on Transparency and Accountability in Content Moderation, which I think eight groups and individuals first created; we then did another consultative process with, I think, more than 50 organizations in 11 or so countries, and we put out a new version last year. You can see those at santaclaraprinciples.org. We did bring those to the minds behind the DSA, and I believe they were taken into account in creating these transparency measures.
There's a lot more there, of course, but I feel like explaining the DSA in full is probably not the best use of my time right now, and it's also quite a complex piece of legislation. But I do see it as the right approach. (Let me check my time before I talk too much. How am I on time? "You're doing okay." Great, excellent.) So: going after harmful, illegal speech, while also giving platforms limited liability when they do their job, and giving users more rights in the process. I think this is a great model for how we can look at this globally. The negative side of the DSA is that it is just for Europe, and a lot of the people who are most impacted by bad content moderation decisions, whether a failure to take something down or a failure to keep it up, are in other parts of the world, where far less attention is given by these companies. We see this disparity all the time, even just in the recent insurrection in Brazil. There was a lot more attention paid when the same thing happened in the US; in Brazil, from what I've heard from my colleagues there, the companies have not given a lot of resources to people who are very concerned about how the platforms were used in organizing it.

So that brings me to the three topics I'm thinking about at the moment. There are a lot more, but I've tried to narrow it down to three examples of ongoing issues that I think companies need to be more responsive to, or, in one case at least, are starting to be responsive to. One is the issue of conflict zones and extremism or terrorism. The Christchurch massacre a few years ago resulted in the creation of something called the Christchurch Call, which works very closely with the platforms to ensure that they are being responsive to terrorist content on their sites. This is by and large a good thing, but one of the issues that has been raised by a lot of civil society groups, and that I've worked on a bit, is that the use of automation by the platforms in this practice, and the scale at which it's occurring, makes it very difficult to determine what is in fact a piece of terrorist propaganda versus what is a journalist or a user documenting a war crime, to put a really fine point on it. These videos, images, and texts have actually been used in war crimes tribunals; there's an example from a few years ago where a YouTube video was used in a case in which a Libyan general was prosecuted in The Hague. I use this example because I think that if the companies were doing a better job of preserving this content, at least on their back end, we would see a lot more cases where this type of content is used to aid prosecutions. But what the companies are actually doing in many cases is disappearing the content. They're not simply, you know, penalizing it; they're taking it down and essentially throwing it in the trash. And this creates an untenable situation for human rights workers who rely on these platforms, sometimes in low-bandwidth situations, to quickly get content out, sometimes even in a livestream format. So I think that's one really strong example of a pertinent, ongoing issue where companies are in many ways under pressure from governments and therefore failing to pay enough attention to the concerns of people working in conflict zones.
Another, very different example, but I think an interesting one because it shares one of the same back-end issues, and I'll explain that in just a second, is adult nudity, or nudity writ large, pornography, really all sexual content. This is an interesting example because child sexual abuse imagery has long been dealt with using automation; it was one of the first content categories to have a fairly automated response. This was through a tool called PhotoDNA, and to explain it in very non-technical terms, it basically allows companies to have something running on the back end so that when a known child sexual abuse image comes up on their platform, it can be matched against a database of images compiled with law enforcement and then automatically taken down. This means that the content workers don't have to see the image; these images are, of course, heinous and traumatizing, so that's a very good thing. It also means that anything put into the database by law enforcement can then be rapidly taken down. Obviously this doesn't cover everything, but it's a good tool (there's a simplified sketch of the idea after this section). This tool was actually modified to be used for extremist content as well, and that's one of the issues.

Now, that's just child sexual abuse imagery, but of course we have the issue of consenting adults wanting to share things, whether we're talking about sexual content or things like breastfeeding images or photos of gender-affirming surgery or mastectomies for breast cancer, and so on. These platforms, because they have to be responsive both to illegal pornography and child sexual abuse imagery and to a very wide audience from all over the world with different cultural values, have typically banned most nudity from their platforms because it's easier. This is my perspective, at least: it's easier to take it all down than to have to decide what is consensual, what is adult, and so on. Now, don't get me wrong, these are very hard problems; AI can't easily distinguish between a 17-year-old and an 18-year-old, to use a blunt example. But at the same time, I think it does restrict freedom of expression, whether we're talking about art or political protest or just someone wanting to share their body. So the Meta Oversight Board, the separate body created by Meta, or Facebook, a few years ago to handle appeals of these content decisions, which has a board from all over the world with different backgrounds, including lawyers, technologists, and so on, has recently overturned Meta's original content moderation decisions in cases involving gender identity and nudity. This relates very specifically to people who are transgender or non-binary being treated as women, as female, in these decisions because they have a certain body part. Basically, what the Oversight Board has said is that it's unjust to misgender people in these cases and identify them as women. And so Meta is now making changes to its policy as a result. I think they're tackling a really difficult issue, and one that it's high time they took on. I'm happy to answer more questions about it, but I'm aware of the time, so I'll jump to my last thought and then give a couple of closing remarks.
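To illustrate the hash-matching idea behind tools like PhotoDNA, here is a deliberately simplified sketch. Real PhotoDNA computes a perceptual hash that survives resizing and re-encoding; this sketch uses an exact cryptographic hash, so it would only catch byte-identical copies, and every name and hash value in it is a made-up placeholder:

```python
# Simplified hash-matching sketch; NOT PhotoDNA, which uses perceptual hashes.
import hashlib

# In practice, a database of hashes of known abuse images is supplied to
# platforms via clearinghouses working with law enforcement.
known_hashes: set = {
    "placeholder_hash_value",  # stand-in entry, not a real hash
}

def check_upload(image_bytes: bytes) -> str:
    """Runs on the back end at upload time, before any moderator sees the file."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in known_hashes:
        return "remove"   # matched automatically; no human has to view it
    return "queue"        # unmatched content falls back to ordinary review

print(check_upload(b"example image bytes"))  # prints "queue"
```

The design point the talk makes carries over: whatever goes into the database comes down automatically and silently, which is exactly why repurposing the same mechanism for "extremist content" raises the evidence-preservation concerns described above.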
Decentralization is the other topic I'm thinking about a lot, and I won't get into this one as much because it's still a very new topic, and one where I'm still mulling over in my head how it works. We've had decentralized social media platforms, or at least I've known about them, since around 2011, when a tool called Diaspora came out for a bit and never really got popular. But the conversation started for me then, that conversation being: what if we all ran our own servers, and what if we were the ones who decided what was allowed on social media platforms, instead of, you know, these major companies and their shareholders? And these sites have existed this whole time, sites like Mastodon. For those who aren't really familiar with them, the way this works is kind of like email. With email, you might have your employer's email or you might not; your friend might have Gmail; another person might have, I don't know, Yahoo. I hope not, but maybe they still have it. And you can all interact with each other, even though your email is on different servers. It's really similar to that: I can have a server, you can have one, your employer can have one, and yet they're all interoperable; they can all interact across the different servers, and each can have its own set of rules, just like email providers each have their own set of rules when it comes to spam. (There's a small sketch of this federation model below.)

Elon Musk buying Twitter created a really rapid rise in Mastodon's popularity; I don't know the user growth stats, but they're pretty incredible. And now people are seeing these things as real possibilities. But from my perspective, they also raise new questions about what content moderation can and should look like, because just as it's problematic for Mark Zuckerberg to make the rules about what I can say on his platform, I'm also not necessarily thrilled with the ideals or ideologies of some of the people who run these servers. I've had lots of conversations with people I work with, especially in the US, and especially Black communities in the US, who feel that they're being tone-policed, told not to talk about racism and things like that by people who just want their servers to be happy and friendly. That's just one example I'm seeing, but you can see how this could create other issues, including for some of the issues I talked about, such as server operators not wanting people to share things coming out of conflict zones like Syria.

So I think that brings me to the conclusion, and really the big question at stake, with respect to hate speech but really with respect to all of these issues, which is: who should decide? Obviously this is a question we've been dealing with since the beginning of time, or at least the beginning of thought around freedom of expression. But I think there's something else at stake when it's private companies making these decisions, because these companies' primary motivator is money, and their primary currency is your data. That's what they're operating under. When they're making content rules or content decisions, they're not doing it with freedom of expression or their users' best interests in mind; they're doing it while trying to raise their profits and also please everyone else.
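Here is a minimal sketch of the email-like federation model just described. The server names and rules are invented for illustration; real fediverse software such as Mastodon implements this via the ActivityPub protocol, with far more nuance:

```python
# Toy federation model: each server sets its own rules, like email providers
# setting their own spam policies, yet the servers remain interoperable.
servers = {
    "home.example":      {"blocked_peers": set(),             "banned_tags": {"spam"}},
    "workplace.example": {"blocked_peers": {"rowdy.example"}, "banned_tags": {"spam", "nsfw"}},
}

def delivers(origin: str, destination: str, tags: set) -> bool:
    """A post crosses servers only if the destination's local rules allow it."""
    rules = servers[destination]
    if origin in rules["blocked_peers"]:
        return False                          # "defederation": a whole-server blocklist
    return not (tags & rules["banned_tags"])  # local content rules still apply

print(delivers("home.example", "workplace.example", {"nsfw"}))  # False: local rule blocks it
print(delivers("home.example", "workplace.example", set()))     # True: a plain post federates
```

Note how this encodes the worry raised above: moderation power doesn't disappear under decentralization, it just moves to whoever writes each server's `banned_tags`.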
And so, in some ways, decentralized platforms could do a better job of making these decisions. But will they do a better job at regulating hate speech? I think it's possible that a lot of servers could be conscientious about this, but at the same time, how could they scale? How can they respond? How can they know all of the laws, for one thing? German hate speech law is pretty complex; I've tried to look at it. And Germany is just one country. How could they know all of the world's laws, and how can they make these decisions? So I don't want to say this is an intractable problem, but rather one where we need all hands on deck. And that's not what we have right now. Right now, and I say this not to insult Europe, I do see Europe making decisions for Europeans while having an outsized impact on platforms and on the entire world. So I think it's really vital that we also bring people into the discussion from other parts of the world, who often have more at stake: the stakes are a lot higher when you're living in a cultural context where the threat of violence, every day, is real.

I'll close with a thought I heard in a podcast yesterday, from someone I know personally. The podcast was talking about the Facebook Oversight Board and how some of the test runs around its creation were made. I was actually at one of these; there were seven different sessions that they did before the Oversight Board was formalized. In these sessions they had people like me, lawyers, and psychologists, and gave us examples where we had to make a decision about a piece of content based on what we thought was right. One of those examples, and this was talked about in the podcast, so I don't have to worry about non-disclosure agreements, was the phrase "kill all men". Mark Zuckerberg has talked about it, and he says: well, yeah, we did censor this, but eventually we came to the understanding, from these consultations with people in the US and in Europe, that "kill all men" was usually used as an expression of frustration and a form of punching up, men of course being dominant throughout the world. But a woman I know from Ethiopia spoke on that podcast and said that in her context, coming from Ethiopia, she doesn't really agree with that. The reason is that although she understands the punching-up, punching-down dynamic, and agrees with it in theory, in a conflict zone, or even a politically hot zone, the dynamics of what is punching up and what is punching down shift rapidly. When you have two groups in conflict with each other, one might be in power one day and the other the next. I had obviously thought of this before in some way, but her comment really stuck with me, because I think that is the problem with enabling authorities to make these decisions without the input of global constituents. We live in a globalized world, and while we don't have a form of global democracy that works for everyone, I think this is a point in time where we have to consider speech decisions, and policies around speech, as something that needs to be personalized. So I will close there, because I think we've got plenty of time for comments and questions. Thank you.