Hi folks, we have Divij and myself, Paul, in today's session. This event is spearheaded by Das, an NGO from Goa. Earlier they used to run these programs called Monday Fix Goa offline; to encourage critical thinking and boost social capital, these online programs are their new format. This is made possible by donations: Das relies on donations for these programs, and the link to donate is in the stream, as is the Hasgeek page for this event. Hopefully you will find all the links in place. We are streaming on Zoom and on YouTube, and Viviani is also live-tweeting from the Hasgeek Twitter handle. Housekeeping rules: the Q&A session will happen after Divij wraps up his presentation. For people on Zoom, please post your questions in the Q&A section on Zoom; I will be handling both the Q&A section on Zoom and the YouTube comments. With that, I'm going to hand it over to Divij to introduce himself and start the presentation.

Thanks, Paul. My name is Divij Joshi. I'm a lawyer and a Mozilla Tech Policy Fellow. My role, basically, is to try to build more just and equitable technology policy in India. I study issues around internet governance, around privacy and data regulation, and I'm particularly interested in unpacking platforms and free speech, which I think is a conversation that's incredibly important and one that hasn't been critically interrogated in India to a great extent. So I'm very excited; thanks to Hasgeek for calling me here, and thanks to Paul for moderating. I'll keep my presentation fairly short and crisp, hopefully, and then I'd love to have a conversation with everybody who's watching, respond to your questions, and maybe learn some things from you as well.

I'll start off with a few stories about why this is important. We're talking about platforms and privatized content moderation today, so I'll unpack what platforms are, what content moderation is, why it affects us sitting here in India in particular, what the government is likely to do, and what social media companies and platforms are doing.

Story number one: in 2019, Twitter suspended the accounts of a Kashmiri media publication and the journalists associated with it for talking about Burhan Wani. This was in response to a government takedown request; it's unclear what the legal root of that request was. This is one example of the extent to which Kashmiri voices have been systematically stifled on online forums and through social media. Twitter's own transparency report, which again is not the most transparent of reports, indicates that close to a million tweets have been asked to be taken down by the Indian government alone, which is about as many as all the other governments' takedown requests put together. And Twitter, again as per its transparency reports, has mostly complied with these requests. So problem number one is this unholy nexus between government censorship and social media censorship.

Problem number two: hate speech, harassment and unsafe online spaces. In 2019, again, a lot of you might have participated in this moment where everybody, myself included, got really sick and tired of Twitter.
And of the fact that it refused to do anything about massive amounts of caste discrimination online, massive amounts of Islamophobia, and sexual violence against women and minorities. At some point a movement developed to mass-migrate: to vote with your feet and show Twitter that if you continue to do this, we're going to leave. A lot of people ended up going to a new decentralized social media network of sorts called Mastodon. That movement didn't last long, and hopefully we can discuss what the problem with that kind of voting with your feet is, or what it didn't solve. The screenshot here is Twitter's response to the entire movement, saying, in effect, don't look at us, we are apolitical. That's a classic response: a denial that any of their actions are political actions, that any of them have intrinsic biases or reflect the particular logics within which Twitter and social media are embedded.

And problem number three, something again very close to my heart. I forgot to mention that in addition to being a Mozilla Tech Policy Fellow, I also write for and contribute to the SpicyIP blog, where I primarily look at issues of copyright and, again, internet governance and intermediary liability. This is something that keeps happening again and again: YouTube has a system called Content ID, which is an automated filtration system. Through that system, it fingerprints certain kinds of musical or videographic works, and if an upload matches a fingerprint which has been submitted by a copyright holder, it automatically takes the upload down or monetizes it in favor of the person who submitted the fingerprint initially. What happened in this case was that a J.S. Bach symphony was wrongly marked as copyrighted content belonging to Sony BMG, and was then taken down. This was a symphony being used by a German professor to teach classical music, and I think it's fairly obvious that something composed two or three hundred years ago is very much out of copyright and very much available for anyone to use. These stories abound. From personal experience: a few days ago I tried to put up a video of my cat dancing to a Black Eyed Peas song playing in the background. It was a ten-second video meant only for a private Instagram account. About five minutes later I got a mail saying it had been taken down: again, a case of automated filtration of something which would otherwise very clearly be allowed under the law. It would be considered fair use of that ten-second song clip. But anyway, it got taken down. I've linked, and I'll be sharing these slides later, many more stories about how content moderation goes wrong like this, who it harms, and why it's something we all need to worry about. A couple of particularly good efforts here: the Mozilla Foundation has done some storytelling work about how content moderation harms us, and Equality Labs has done a very good, deep dive into Facebook's hate speech problem in India, where they've highlighted caste discrimination and Islamophobia.

So what we're talking about today is what I think of as a wicked problem.
A wicked problem, a term often used in policy discourse, is a problem that's difficult or impossible to solve because the requirements for solving it keep changing. And I think the takeaway is not that we have a clear solution to online harms or to why content moderation goes wrong, but perhaps to indicate that this is a space that is vital to our democratic participation and to the public sphere. It's important to recognize the kind of politics which inhabits this space, and to recognize the rules through which all of the decisions about our speech and our online behaviors are made. We need to recognize that platforms and these different institutions and rules exhibit certain forms of power, and we should figure out for ourselves how we can respond to this: how we can make our online lives more equitable, more democratic and more participatory. That's the point of this presentation.

We tend to think of platforms as intermediaries. The term itself embodies a neutral third party which is only connecting two individuals or two communities together, with no real role to play in between. And that's a somewhat deliberate way in which this term has been employed by the major online companies, to make it seem that it's just two people speaking to each other with nothing in between: Facebook is just a way to connect the world. Now, this is blatantly false. All platforms govern speech; in fact, content moderation is at the heart of what any platform does. Without it, its business, its rationale, its reason for existence would not exist. So the question isn't whether platforms are governing speech, but rather how platforms are governing speech, and whether and how we can unpack that and respond to it.

So what we need to unpack is: how do platforms decide the rules, principles and standards which govern our speech today? How do platforms apply these rules, and how do they parse the innumerable contexts and jurisdictions in which these rules are meant to be applied? And primarily, how do they do this at scale? Because I think the biggest problem, or one of the biggest problems, with platforms and the kind of activity that goes on through them is one of scale. To give you a quote from a Twitter executive, right at the top over there: given the scale that Twitter works at, a one-in-a-million chance happens 500 times a day. Which is also to say that even if only, say, 0.1% of all speech on Twitter is hate speech, that still leaves something like 100,000 or 150,000 tweets a day that are abusive and harmful (a rough back-of-envelope on this arithmetic is sketched below). Now, if you need to apply a judicial logic to each of these cases, for a court to determine whether or not each one constitutes illegal or harmful speech, you start noticing why this problem of scale becomes particularly pernicious. So what we need to do is to see what politics platforms embody within these practices of content moderation. And to answer that, first of all: what do platforms do?
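To make that scale point concrete, here is a rough back-of-envelope calculation in Python. The daily tweet volume is an assumption chosen purely to illustrate the talk's figures, not an official number:

```python
# Back-of-envelope arithmetic for the scale problem described above.
# The daily volume is an assumed, illustrative figure.
tweets_per_day = 150_000_000      # assumption; real volumes are of this order
hate_speech_rate = 0.001          # the hypothetical 0.1% from the talk

abusive_tweets = tweets_per_day * hate_speech_rate
print(f"{abusive_tweets:,.0f} abusive tweets per day")   # 150,000

# If each case needed even one minute of careful human review:
review_minutes = abusive_tweets * 1
reviewer_days = review_minutes / (8 * 60)                # 8-hour shifts
print(f"~{reviewer_days:,.0f} reviewer-days of work, every single day")
```

Even under these conservative assumptions, individualized, court-style review of every case is plainly infeasible, which is the speaker's point.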
I know the topic says privatized censorship, but censorship here is a larger term: it's not simply the takedown of content, but also how platforms structure and govern content across its entire life cycle. So it's not just about what content is taken down, but about what content is filtered in, and who is allowed to see what forms of content. When I log on to Facebook or Twitter, I don't necessarily see things in chronological order, and I don't necessarily see the things that other people would want me to see. Twitter might determine that tweet X is of more importance or relevance to me, because that's the kind of test it employs, and therefore it filters that content to the top of my feed, while a lot of other content that it thinks I may not be interested in gets filtered down. It's like a ranking system (a toy version is sketched below).

Similarly: what kind of speech do they allow, and what kind of speech do they recommend? When it comes to hate speech, or to the different metrics applied to speech, the question about moderation gets flipped: what kind of speech are your guidelines, or your institutional logics, permitting? This is an important question, because when you look at how platforms operate, a lot of it is about improving engagement, or connecting the world, as Facebook would put it. And in that metric of engagement and popularity, a lot of voices that are not popular tend to get left out, which is why the question of what is explicitly allowed also needs to be brought out. And similarly, what is recommended? When, for example, YouTube creates an automatic playlist for me based on one or two videos, how is it doing that, and how is it moderating what I should see next, or the general sphere of content that I'm living in?

Why do companies moderate is the second question, and there are a number of reasons. The primary one is that if companies did not moderate, we would be drowning in a sea of digital noise; we wouldn't be able to make sense of anything being said. But to go a step further, it's often about their commercial logics, and about responding to what their users want and to user research. A lot of it is about creating safer communities, or upholding stated principles: to return to Facebook, it wanted to help with the Arab Spring movement and create more democratic movements, or just create safer or more fun communities, like, say, a Reddit or a Vimeo. And finally, they also moderate within specific legal and social contexts. Often they do it responding to laws; often to social obligations, increasingly the latter, because we don't really have a legal regime to deal with this, as I'll indicate later.

And finally, how do they do it? What are the institutional and technical mechanisms by which platforms exhibit their politics? Like I mentioned, there are institutional logics, technological logics and legal logics behind how platforms moderate content. Often these are not clear, and I don't think there's any uniformity or much clarity in how this is going on.
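As a toy illustration of that ranking idea, here is a minimal Python sketch of an engagement-ranked feed versus a chronological one. The scoring formula is invented for illustration; real platforms use far more complex, undisclosed models:

```python
# A minimal sketch of engagement-based feed ranking, as opposed to a
# chronological timeline. The score is a made-up stand-in for the
# relevance models platforms actually use.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    age_hours: float

def relevance_score(post: Post) -> float:
    # Hypothetical score: engagement boosted, older posts decayed.
    engagement = post.likes + 3 * post.shares
    return engagement / (1 + post.age_hours)

def build_feed(posts: list[Post]) -> list[Post]:
    # A chronological feed would be: sorted(posts, key=lambda p: p.age_hours)
    # Instead, the platform surfaces what it predicts will engage you most.
    return sorted(posts, key=relevance_score, reverse=True)
```

The point of the sketch is that even this trivial scorer is a governance decision: whoever picks the weights decides whose speech rises to the top of the feed.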
All of these logics are messy, contested and non-uniform. But the first thing we can think about is: how do platforms come up with the rules they apply? There have been some very interesting case studies, mostly in the US, about the rule-making procedures and business policies behind this, and I've pointed out some examples here. Each of the major social media platforms, your Reddits and your Facebooks, and even something like Wikipedia, has community guidelines. Those guidelines reflect what kind of community each of those platforms wants to build. They're often incorporated within, or kept distinct from, the terms of use, which are the contractual terms that all users are expected to comply with. Now, these written rules are not necessarily the ones that are actually applied, because, as I mentioned, it's often messy, contested and non-transparent, so we don't know exactly how they're being applied. But these rules are made up within boardrooms, by the public policy advisors to these social media companies, and they're made within a specific institutional and social context.

To give you an example from Facebook: not the community guidelines, sorry, but the guidance they give content moderators. They have a simple rule which says that if speech attacks a protected category, the speech is impermissible as per Facebook. So if you say "white men suck" (this is an example they use in the slide, no offense to anybody here, of course), that attacks a protected category, and it will be taken off Facebook. But if you say "black children suck", that for some reason counts as an attack on an unprotected category. The reason is that although certain demographic categories are protected, subcategories within those categories are not. "White" and "men" are both protected categories, but "children" is not, so you can attack black children, or black drivers, or female drivers. It's a fairly strange metric to use, but basically that's the guidance that then goes out to the thousands of content moderators, as well as into the algorithms Facebook employs to take down content. (A minimal sketch of this rule is below.)

Another thing platform regulation is based on is legal and extra-legal government pressure, referring back to the unholy nexus of government censorship and platform censorship. Again, an instance from Facebook's hidden content moderation guidelines, not the public-facing ones but the ones Facebook internally comes up with and uses: Facebook at some point decided that calls for an independent Kashmir are against Indian law, which is blatantly untrue. Holding up a "free Kashmir" sign is not sedition, nor is it against Indian law. But as per Facebook's content moderators, and as per their algorithms once these rules get embedded within automated mechanisms, all of this will be filtered as unlawful speech. Now again, there are different ways.
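Here is a minimal Python sketch of the protected-category rule described above, as it was reported from Facebook's leaked moderator training slides. The category sets are illustrative, not Facebook's actual lists:

```python
# A minimal sketch of the leaked moderator guidance described above:
# an attack is removed only if *every* attribute of the target group
# is a protected category. Category lists here are illustrative.
PROTECTED = {"race", "sex", "religious affiliation", "national origin"}
UNPROTECTED = {"age", "occupation", "social class"}  # "children", "drivers", ...

def attack_is_removed(target_attributes: set[str]) -> bool:
    # "white men":      {race, sex} -> all protected  -> removed
    # "black children": {race, age} -> age unprotected -> stays up
    return target_attributes <= PROTECTED

print(attack_is_removed({"race", "sex"}))  # True:  "white men suck" comes down
print(attack_is_removed({"race", "age"}))  # False: "black children suck" stays
```

The subset test makes the strangeness of the metric visible: adding any unprotected modifier to a protected group strips the whole group of protection.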
This is not to say that the Facebook guidelines, this massive, top-down approach to content moderation, are the only model. There are different models of how platform moderation works, depending on the platform you approach and what you want from it. Some of these are mostly exhibited in smaller platforms, like, say, a Vimeo or an upcoming bespoke platform, where there's a lot of editorial control over what you get to see, with explicit editorial endorsement of content. Another is a more decentralized approach; Wikipedia is a great example of this, where certain people within the community are held responsible for applying the rules of the platform. These don't need to be employees or executives of the company itself; it's more community-oriented. Even Reddit, to a large degree, relies very heavily on community-based content moderation in its forums. And finally, you have the large-scale industrial approaches of the kind I've mostly been referring to, the ones that possibly pose the greatest challenges to us: Facebook and Twitter employ these massively centralized, somewhat vague, and somewhat ununiformly applied broad guidelines and rules.

Another thing that's really important to point out, and something that often goes missing in this conversation, is that content moderation is a gigantic and continuous business. Given that maybe close to 3 or 4 billion people are now online and using social media, imagine the effort that goes into regulating speech at that scale. And behind it is not simply technology, not simply executives, but very human labor that undertakes the content moderation. In Bangalore itself there are huge offices of people sitting in front of screens, having to look at really, really terrible content day in and day out, clicking whether each item is accepted or not. There's some wonderful work that's been done around this, as well as some great films and other cultural work.

So yes, there's a lot of human content moderation going on, but increasingly, given the sophistication of machine learning tools, AI, and general algorithmic tools, these are being employed to deal with the problem of scale: the problem of there being a billion things to govern, how do we do it? Some examples: Content ID, which I already spoke about, and fingerprinting techniques more generally. These are the most popular techniques employed so far. Fingerprinting, or what's sometimes called hashing, although it's not exactly cryptographic hashing, is a technique where a certain piece of content gets fingerprinted and uploaded to a database for purposes of matching it against any subsequent content. So, like I said, if you have submitted a copyrighted work to YouTube, it will hash that content and upload it to its Content ID system, and if there's any subsequent upload which matches the content already in the system, it will detect it and block it.
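Here is a minimal sketch of that fingerprint-and-match workflow. Real systems like Content ID or PhotoDNA use perceptual fingerprints that survive re-encoding and cropping; a cryptographic hash is used here only for brevity, so this toy version matches exact copies only:

```python
# A minimal sketch of fingerprint-style matching as described above.
# Not YouTube's actual system: a real perceptual fingerprint tolerates
# re-encoding, cropping, etc., while SHA-256 matches exact bytes only.
import hashlib

fingerprint_db: dict[str, str] = {}   # fingerprint -> rightsholder

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register_claim(content: bytes, rightsholder: str) -> None:
    # A claimant uploads a reference copy; its fingerprint is stored.
    fingerprint_db[fingerprint(content)] = rightsholder

def check_upload(content: bytes) -> str | None:
    # Every new upload is fingerprinted and matched against the database.
    # Note what is missing: no notion of fair use, teaching, or parody.
    # A match is a match, regardless of context.
    return fingerprint_db.get(fingerprint(content))

register_claim(b"<reference recording>", "Sony Music")
print(check_upload(b"<reference recording>"))  # "Sony Music" -> block/monetize
```

The design choice to highlight: the lookup has no input for context at all, which is exactly why the Bach and cat-video takedowns described earlier happen.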
Similarly, Microsoft has PhotoDNA, a database that was mostly created for child sexual abuse imagery, which is one of the biggest problems on the internet, always has been, and is a genuinely important problem to deal with. But when the Indian government found out that such a system exists, one of its agencies, I think it was the CBI or the IB, saw it and thought: wow, this is a great way to basically find anything we want to. So they started asking Facebook to run PhotoDNA over every photo within its systems, so that the IB and the CBI could better investigate. Fingerprinting is also being used in more voluntary initiatives. Like I said, there's already quite a large global effort, including online platforms, to tackle child sexual abuse imagery, a very important effort. Similarly, there's the Global Internet Forum to Counter Terrorism, in which a bunch of online platforms work together with governments. It's not very clear how certain content gets flagged or submitted there, but they use similar fingerprinting techniques to catch what could potentially be unlawful content.

And I'll briefly talk about why this is particularly pernicious: why robo-censorship, while it seems like a solution to the problem of scale, is incredibly problematic. Human moderators can often grasp the context of particular kinds of speech; when you're simply building automated systems, or even machine learning systems, machines cannot grasp that context, and all speech governance is heavily context-dependent. If you're standing in the middle of a field and you shout "fire", it may not be illegal, but if you do it in a cinema, it may be. Another example from Facebook: it decided to take down that iconic photo from the Vietnam War, of a young girl running away from the US Army's napalm bombing, saying that because the young girl was without clothes, this was child sexual abuse imagery. In fact it was content meant to critique US imperialism and the Vietnam War. A lot of these mistakes get made particularly in copyright, because within copyright law there's a lot of context-dependence on how cultural creations are shared and whether that sharing is allowed. We have doctrines like fair use and fair dealing under copyright law which allow large amounts of cultural or copyrighted content to be shared by people, and they are heavily context-dependent. If you're using a work in the course of teaching, for example, it may be allowed; but if you're simply using it to party, to just play a song out loud for your neighborhood, it would obviously not be. Unfortunately, an algorithm or a machine cannot determine this and doesn't know how the content is being used.

The other problem, of course, is that we are no longer living simply in the era of TV propaganda. More and more, our online deliberations shape our society and our politics.
We've already seen instances like Cambridge Analytica: the whole gamut of political persuasion tools being used to affect our political choices as well as our social choices. And the increase in hate speech online can almost directly be tracked to rises in violence against minorities. So essentially, the rules these platforms make very directly affect our society, and affect us as individuals. But there's been no effort to think about how to democratically participate in this, or how to democratically regulate these platforms. The choices we make about speech, about our communities, about what we condemn and what we condone, are normally choices made through constitutions and written laws, which are then adjudicated case by case by judges, so that they can be open, transparent and accountable, and so that everybody can democratically agree on a shared set of rules by which we all seek to abide. Platforms, unfortunately, have positioned themselves as private entities, and as private entities they technically have no constitutional requirement to abide by democratic participation or constitutional rules. So they can censor freely, and they do censor freely. They can promote the content they want, freely. And in that legal void it's very difficult to ask platforms to behave in any specific manner with respect to any kind of content.

And as the specific problems of massively networked speech increase, we're asking more and more of platforms. We're delegating more and more responsibility over the public sphere, and over crucial decisions about what speech should be allowed in public, to these platforms and to their inherently opaque, non-transparent and unaccountable practices. The problems of fake news, of what is truthful and what is not: that's been handed over to platforms. Problems of privacy, of how much privacy I can reasonably expect online: the landmark case here is Google Spain v. Costeja in the EU, where the highest court of the European Union essentially delegated decisions about the right to be forgotten to Google. It said that if somebody comes to you asking to be de-indexed from Google search, you need to deliberate on whether the public interest or their privacy is more important. So instead of a court making that determination, it's now someone within Google making up these rules.

Essentially, because this is now emerging as the most important avenue for the public sphere, and for democratic and consequential decisions, these are questions that should ideally be undertaken by courts and parliaments, not by boardroom executives. However, in India we really haven't done much to deal with this problem. We really haven't done much to talk about democratic control over online speech. I'll give a brief history of how platform speech is regulated in India, to basically conclude that it's not.
It starts with the concept of intermediary liability, and it actually goes back to a very old decision. Some of you may be aware of Avnish Bajaj, the director of Baazee.com. Baazee.com had been used to sell pornographic material online, which was declared obscene material by the courts. This was prior to the amendment of the IT Act, and at that time, because Avnish Bajaj was the head of Baazee and Baazee was selling pornographic content, the court held that Avnish Bajaj could be arrested for it. In the aftermath of that, India incorporated rules of intermediary safe harbor, which by now is almost a global rule: it came up through international deliberations and has been there for a long time in US and European law as well. It basically says that platforms which essentially allow two people to connect to each other are simply intermediaries, and as intermediaries they should not be liable for the kind of speech or connections people are making with each other. So Facebook, as a platform, as an intermediary, is only facilitating communication between two people, and should not legally be liable. This is an important rule: if you think about it, even though Facebook is moderating content and has some responsibility over content, it is not the person generating or creating that content, so the forms of liability definitely differ. Safe harbor also came up in a context where, in its absence, online platforms could very easily be bullied into censorship by governments particularly, or by anybody with high social standing saying: I'm going to sue you if you don't take this down. So it helped maintain a modicum of protection for innovation and free speech online.

Ultimately, the question of the limits of safe harbor went to the Supreme Court in the Shreya Singhal decision, which you might know as the case in which the Supreme Court struck down Section 66A of the IT Act. In it, the Supreme Court also read down the language of Section 79 to make it even more difficult for anybody to hold platforms liable for the speech they host. In Shreya Singhal v. Union of India, just reading out the relevant portion, the Court said that platforms have to have actual knowledge of the unlawful content, and then fail to do anything about it, before they can even potentially be liable for that content. So if I say something hateful on Facebook, Facebook will only be liable if the person affected can manage to get a government order, or a court decree, declaring that this is unlawful content which Facebook must take down. In theory, this is a great rule: it means the deliberation is made with at least some modicum of judicial or independent oversight. Unfortunately, it has turned out to be practically unworkable, because you cannot expect millions of people who have grievances about being attacked online, particularly vulnerable minorities and marginalized communities, to start going to courts or to the government. Kashmiris or queer people are not going to start approaching courts because they've been attacked on Facebook and want that speech censored, or want to be protected.
So that's the status quo. There are also certain guidelines that intermediaries need to follow under the law. They're very vague guidelines; they don't really say much about what a platform should or should not do. But in 2018 the government proposed changes to those intermediary guidelines, and the thing I most particularly want to point out is that they want every platform to adopt automated filters to proactively identify unlawful speech. Now, this is incredibly vague phrasing. What constitutes unlawful information or content, and how platforms are expected to make that decision, is not given in the guidelines, which basically means that any and all responsibility for flagging unlawful speech falls on the platform, a platform which, if it fails to flag unlawful speech, can potentially be prosecuted and fined crores of rupees. What that essentially means is that it's definitely going to lead to over-blocking, to a system of over-filtration of lawful content, which is very, very scary.

The other thing that the Shreya Singhal standard, and the absence of any agency or legal oversight over content moderation, has led to is that various courts have stepped in and established their own models of how content moderation should work, models which are in tension with both Shreya Singhal and Section 79 of the Act. In Sabu Mathew George, the Supreme Court came up with a doctrine of "auto-block", saying that Google must auto-block any ad which is contrary to the pre-natal sex determination laws. Similarly, in In Re: Prajwala, the Supreme Court said there needs to be a committee, comprising a number of independent people as well as the platforms, which will decide rules for automatically taking down imagery of sexual violence against women. And there's been a lot of executive arbitrariness in how these rules are applied. When the Indian elections were going on last year, and the problems of fake news and hate speech were proliferating, the Election Commission of India tied up with Facebook and Google and issued them executive orders. We still don't know how those orders were arrived at. We don't know what rules the ECI was applying. We simply have to go by the news reports, or the ECI's word, that it was acting in good faith and being transparent.

So how do we finally go about solving this? What is the way out of this quagmire? Like I said, the point of this talk is not to give a definitive solution for how we solve privatized censorship by platforms and make social media better, but to recognize that these are the problems, and these are the focus areas in which things can be improved. If the problem is that we're currently swinging between government censorship and private platform censorship, is there a middle ground, and can we improve something in that respect? I think there's a lot to be improved before we start considering things like the automated robo-filters the Indian government has been proposing.
And the way forward is to increase user agency, or rather, not just users': to increase the agency of the communities and individuals participating in online speech. How do you do that? You make your rules more transparent to people. You make yourself more accountable to people. You release information to the public, to researchers, or to affected people about how you're engaging in these content moderation practices, and you give justifications for why. You have accountability, which means that if I have a problem with how Facebook or Twitter has censored my speech, I can approach somebody, ideally a person, who can assess the evidence, overturn a wrong decision, and apply a clear standard of rules. These seem like small asks, but translating them to the question of scale, and to very, very different contexts, is difficult.

And of course there are limits to self-regulation. Different platforms are trying different ways of regulating themselves. One major effort has been the recent Facebook Oversight Board, where they've basically created a Supreme Court of sorts, through which certain cases of content moderation will be filtered up to a committee of 40 individuals, who will deliberate amongst themselves and come up with almost constitutional rules for Facebook to follow throughout its practices. Whether this will work, taking into account all the contexts in which Facebook operates and the scale at which it operates, remains to be seen; so does what happens when it conflicts with Facebook's business logic.

But what I want to endorse more is taking democratic control of the online public sphere, ideally through laws, which doesn't necessarily mean giving the government more power. I think there are tactical and strategic ways for communities to engage with online platforms and come up with rules that benefit and empower them. But ideally all of this should be done through the rule of law; it should be parliamentary, and it should have the intervention of the judiciary as well. There are some laws and some jurisdictions already thinking about this. The Network Enforcement Act (NetzDG) in Germany, which was initially criticized as overly censorial, has actually turned out to draw a different kind of balance. Whether you think that's a better balance depends on your views about online safety and online censorship: it's led to an increase in takedowns, but possibly also to better protection of online minorities and marginalized communities, and to a decrease in hate speech. Wherever we draw that line, it should be a balance drawn democratically and through participatory mechanisms. And finally, I just want to share some resources, and also talk about the question of why Mastodon didn't end up replacing Twitter.
Some of these resources are work done by my co-fellows at the Mozilla Foundation, who have done some fantastic work tracking online content moderation, responding to it, and giving people tools to participate in it. Silenced Online, made by Laylzara, is a repository of stories of content moderation gone wrong. The Heroku app is by Emi, another fellow, who has built tools to understand how hate speech proliferates, and to track and respond to it. Ranking Digital Rights and the Santa Clara Principles are two important self-regulatory efforts which do a bit of naming-and-shaming and advocacy around how platforms can improve. And the last link is an interesting piece about how decentralized social networks can work. The problem I wanted to point out is that when everybody left Twitter to join Mastodon, what they didn't realize is that platform censorship is not simply a technical problem. It's a social problem: a problem of how you form communities and how you think about online communities. Twitter has come across as a community at scale, where we prioritize massive scale over interpersonal relations or smaller networks. So perhaps Mastodon can replace Twitter to the extent that it allows you to connect to your smaller networks and create different kinds of communities, but it will never replace Twitter in doing what Twitter does. People didn't realize that once you move away from one kind of community, you need to build and sustain a different kind of community on Mastodon, on your decentralized network. So perhaps the problem is not simply technological; it's about what kind of values we want to create, and about thinking more deeply about those values as we set about creating new platforms. And yeah, I think I've gone fairly over time, but I'll hand it back to Paul.

Okay, cool. So far we have not gotten any questions on YouTube or Zoom, so I'm going to start the ball rolling here. Speaking of Mastodon, because you ended there: Mastodon had this weird moment when it started gaining traction, where some of the American servers started objecting to servers in Japan engaging in content that was obscene or pornographic according to their social and cultural interpretation. And the way Mastodon servers work, much like the way Discord servers set their own rules, each Mastodon server ended up setting its own rules. What was interesting was that the Japanese servers asked the others to mind their own business, because it was their own social petri dish, so their rules applied. I don't think a lot of people understood how those dynamics worked, because this was new in the way it was playing out. So when you mentioned how Facebook has a centralized or universal set of rules for moderating content, how do you see this playing out? Because the question I'm arriving at is: given the moderation principles that exist as a result of the way digital platforms are built, do you see bleed-throughs happening into physical spaces in terms of speech?
In terms of sensitivity, people asking to be censored, and things like that?

Yeah, that's really interesting. I can't think of explicit examples of that happening, but one thing that comes to mind is how certain institutional or legal logics have been exported through platforms. When we think about the social or legal rules we follow regarding speech: India, for example, has fairly strong protections against hate speech, but the concept of hate speech more or less doesn't exist under First Amendment law in the US; it's fairly difficult to even think about hate speech under US law. Jeremy Waldron has made a very strong case for why US law should recognize and counter hate speech, something the Indian Constitution and courts have, to a large extent, recognized. But the platforms were largely operating within this First Amendment sphere, at least before governments started jumping in and trying to regulate platforms as they do now. They operated within a mostly American legal logic, and they started exporting those rules, which I think explains a lot of the design choices they made: Twitter's initial push towards exporting democracy during the Arab Spring, or Facebook's "we promote connections" rhetoric. The moment they started realizing these were important concerns was when they started having offices around the world and taking into account what their users and communities in the rest of the world thought. So I don't know if that answers your question, but I definitely think it changed the logics of how communities think about speech all around the world, by exporting those logics from one legal and institutional framework onto another.

Reggie from Zoom has an interesting question: are there any online platforms that deal with complaint redressal for online disputes?

Most of the large online platforms already have their own complaint redressal mechanisms. They don't work very well, granted. And, like I said, I don't think there's one platform that streamlines all of this. There are some platforms you can use, as a researcher or as an affected individual, to get more information. The Lumen database is a great one for this: it keeps track of takedown requests from across the world; Google submits its takedown notices to it, and it's hosted at Harvard University. But at least Twitter, Facebook and Reddit will give you the option of reporting grievances to some kind of mechanism. The problem, of course, is that you don't know how long it will take to respond, you don't know what rules are being applied, nor how they're being applied. Which is why I often see really absurd outcomes. I have a habit of going on Twitter and reporting right-wing hate speech, and Twitter comes back to me and says, yes, this is hate speech, it violates our community guidelines. But when I check, that speech is still online, so I don't know what action has been taken.

This next one builds on Reggie's question, and it's an interesting question in terms of the time it demands of people.
So a lot of people have neither the time nor the wherewithal to interpret a EULA, or whatever moderation document or playbook is generally available on each of these digital platforms. And Reggie follows up by asking: do we need one? Because then it would act as a single point you go to; as opposed to now, where when you're blocked or deplatformed you generally have to seek redress from someone who knows this, and people don't have the time for that. It puts people in a dilemma. Do you think, or do you know of examples where, people have tried to create something like that?

Again, I think the problem is one of legal accountability. It's not that platforms don't know they're doing this. It's that they're deliberately not doing it, because they don't have to, and because they consider themselves private fiefdoms which operate purely in the private realm of contract law and don't need to be accountable to people. So if you go back to, for example, the structure the NetzDG, the German law, has created: it has built incentive mechanisms for all of these platforms to be more accountable, to have grievance redressal mechanisms established in law, and to have clear timelines for responding through those mechanisms. And I think that's what works. Ultimately you need governments to tell platforms: you need to comply with the law. Simply relying on their commercial logic doesn't seem to be working, and I don't see how else you persuade them, unless there's another kind of mass exodus. And because of the very unique logic of platforms and how they operate, it's difficult to envisage a mass exodus without systemic change, without the law interfering or big platforms being broken up. So it's a loop. I think regulatory intervention is almost certainly necessary, and a purely technical or business intervention may not be enough.

So this is what's confusing. If there's a chilling effect on a platform, it would make logical and business sense to prevent that chilling effect by addressing it. Are you saying there has not been a case where a chilling effect, around censorship or hate speech or fake news or aggressive or weak moderation, has pushed platforms to act in a timely way? Have there been modifications in the way these platforms work?

Yeah, there have. Like I mentioned, there have of course been lots of complaints about how this has happened, and lots of advocacy around the arbitrariness platforms have engaged in. Go back to the Equality Labs report I pointed to, or to the initiatives on this slide: companies have responded to those in their own manner. Facebook's Oversight Board is almost a re-constitutionalization of how Facebook's speech practices operate; they're trying to make it work under a framework similar to the rule of law, but a private framework. But they still haven't given clear instances of what one affected user can do. This is about coming up with principles for how Facebook can work.
But it's not regulation in the sense that if I submit a complaint to Facebook, it will come back to me in 24 hours and say what it has done to address that complaint. That doesn't exist anywhere.

So you can't hold a platform accountable in case no action is taken?

Exactly. And there's no law which can prevent a platform from censoring, either. A government, for example, is constitutionally prohibited from censoring me in many circumstances; you would call that a restriction on your freedom of speech. But that freedom of speech doesn't necessarily translate onto private platforms, simply because they have positioned themselves as private platforms. I think a re-conceptualization is therefore necessary: to think about them as public utilities, or at least as regulatable entities. Not necessarily ones that should be liable for content, but ones that should be accountable for how that content is moderated by them.

Devika on YouTube has asked: given the Shreya Singhal case law, how do you justify suo motu removal of information by Facebook which violates its community guidelines, or removal when a user reports some content?

Thanks for that question. But yeah, that's basically the problem: to a large degree, you can't. See, there are a few ways of looking at this. In the US, for example, people have tried to sue Facebook and other social media platforms for wrongly taking down or wrongly censoring their content. But go back and read the terms of use: you as a user, as an individual or a community, have to follow Facebook's community guidelines. Facebook, on the other hand, does not have to follow anything. It's not accountable to you in any manner; it's ultimately only accountable to its shareholders. This is a problem we've also faced in the privacy discourse, where we've asked what responsibility platforms have, and there's been a reconceptualization of platforms as fiduciaries, as holding duties of care. This duty-of-care approach, which is quite an old tort law approach from the industrial era, has already been mooted: the UK, in its Online Harms White Paper, has thought about adopting a duty of care, which is a more common law approach to defining how platforms should work.

But then it swings the other way. Because the duty of care isn't set out as clear standards or principles to follow, can we really interpret it? Doesn't that at least create a chilling effect?

Again, it's about where you want to draw that boundary: about starting to think about those laws and where you want to draw this line. But to answer the basic question: you can try to sue them if you can make a creative interpretation of contract law, but I don't think it works.

In the part of the presentation where you talked about how platforms interpret themselves, there's one aspect of how they sell themselves to potential first-time users. Facebook initially starts out as a social networking platform, and a lot of the content is still mundane content. But both Facebook and Twitter, and now other platforms as well, have another role: when push comes to shove, they become sites that host news.
And I think there have been many instances where these platforms behave differently: Facebook, Instagram and Twitter function more or less like open platforms, while WhatsApp functions like a closed one. Each of them has its own kind of agency in allowing news to disseminate. But when it comes to speech or fake news, as you described in the presentation, governments were trying to hold WhatsApp complicit in its role in spreading fake news, and the platforms stepped back from being disseminators of news; they say, we are just a social networking platform. This is an interesting manifestation of what you just described, where you can't hold them accountable the way you hold individuals. Do you see this? I think you described it as intermediary law. Do you see another way of interpreting it? Because they seem to be making use of this loophole where they can function with the power of a publisher but not be held accountable as an individual, or as a platform, or as a space. The metaphor seems all over the place.

And it's entirely deliberate. The choice of framing themselves as platforms or as intermediaries, while continuing to deploy power over the network, to modify the network, to recommend stuff, is a choice that platforms make. Sometimes it leads them into more difficult territory, but mostly it works to their benefit. They're allowed to mold the law to their benefit, because most laws only have this very blanket charter saying that you're an intermediary if you're not yourself generating the content. In India, technically, the law uses the term "modify", but that term has not been expanded upon, so it's possible for a lawyer to go to court and argue that Facebook is modifying content when it curates news for me. But I don't know if it would work. There's a great article called "The Politics of Platforms", written by Tarleton Gillespie, which unpacks this, the kind of duplicity involved in framing something as a platform while actually being the core logic responsible for governing speech. "Platform" makes it sound like it's raising me up to a certain level so that people can see me, as an individual, as a community, whatever. What it doesn't say is that the platform is also shaking you around, moving you, turning you upside down.

Another question was around human labor, and this edges towards what is happening right now. On one side you have an algorithmic way of moderating content, and we see this, as you described, around takedowns of content that might be violating, or be perceived as violating, IP. On the other hand, when platforms are trying to block speech or obscene content in this specific time period, during a pandemic, the kind of prompts that come back say: we are pressed in terms of the workforce behind this, and we're not sure how it will play out. So for IP protection there seems to be algorithmic moderation, but for speech there seems to be human labor involved. Is there any particular reason why it's laid out this way, and why it's not more proportional?

So yeah, there are a few reasons, and just to give some context, you're absolutely right. To make this more COVID-relevant, I suppose:
What you've seen since the beginning of the pandemic is that social media companies have laid off, or rather stopped work for, content moderators, which has made them rely more heavily on automated moderation. And now you have even more complaints coming out, both on the IP side and in general speech governance, because automated moderation is being used in other areas as well. And it depends; you have to go back a little to the kind of technology at the heart of this. Most of it is fingerprinting technology, which is a very simple, direct matching technology: it matches a bit of text or an image against something that has already been uploaded to a server and determined to be illegal. Now, this works well for, say, a video or a piece of copyrighted cultural content, because those are by definition unique; copyrighted content has to be a creative and unique piece of work. So it's easy to fingerprint it, build a database of it, and match against it. That doesn't mean it's necessarily correct to do so, because, like I said, even in IP the context in which you're using a particular image or work matters; but it is technically much more feasible.

Hate speech, on the other hand, is different. There's a fascinating paper about researchers who tried to build a machine learning model to detect racial hate speech. What ended up happening was that African American communities and other communities use racialized language in very different ways, and it's very important for those communities to be able to use that language. But the machine learning model took the speech it had determined was hateful according to one standard and applied that standard to the other: to in-group uses of racialized speech. That's where context becomes really important: you need to know how people are communicating with each other, what the community is, and what the expectations within that community are. (A minimal sketch of this context-blindness is below.)

Which is why fingerprinting, which is not context-dependent, has worked comparatively well in exactly one area: the systems created around the takedown of child sexual abuse imagery. Well, not worked fantastically well, but at least it has resulted in a lot of takedowns of such imagery. And that's because we almost universally agree that this content is harmful and bad; there is no context-specificity about it. If you're uploading an image of that kind, there is no other justification for it. On the other hand, take extremist content: it could appear in news reportage, in a parody, in war reporting, and all of these are really important public functions. Whether you're using an ISIS video to make critical commentary about ISIS, or using it to spread extremist speech, is not at all clear to a machine.
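As a crude illustration of that context-blindness, here is a toy filter that flags on token match alone. The blocklist entry is a placeholder, not any platform's actual rule; the point is that the same string gets flagged regardless of speaker, audience, or intent:

```python
# A minimal sketch of why context-blind filtering misfires: the same
# token can be a slur from an outsider or reclaimed in-group usage,
# and a matcher that sees only text cannot tell the difference.
BLOCKLIST = {"<reclaimed-term>"}   # placeholder entry, for illustration

def flag_post(text: str) -> bool:
    # Flags on token match alone: no speaker, no audience, no intent.
    return any(term in text.lower() for term in BLOCKLIST)

posts = [
    "<reclaimed-term> used as a slur by an outsider",     # true positive
    "<reclaimed-term> used within the community itself",  # false positive
]
for post in posts:
    print(flag_post(post))   # True for both: the filter can't see context
```

A learned classifier has the same failure mode in subtler form, which is what the paper described above found: the training standard of one community gets applied wholesale to another.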
Even when something like that gets fingerprinted and matched using those techniques, it becomes incredibly difficult to input that context and make the correct decision.

But in terms of content takedowns, especially, like you said, for videos with audio content, they're not particularly looking at context; it's just fingerprint matching. It's very weird how unevenly context plays out in looking at content. I have one more question; I don't think there are any new questions on either side. And it plays off the earlier one. In China, in terms of moderation and censorship, they have certain interesting ways of keyword filtering, and they use human labor to a much, much greater extent to make sure that dissent is curbed in different ways. And in this game of escalation, of cat and mouse, you have different people trying to subvert the keyword filters, often using cultural markers like children's rhymes and the like. The "grass mud horse" is an interesting example of trying to evade both machine filtering and human filtering, because it's difficult to tell whether the thing is being said in jest: in Mandarin it plays on the inflection of the language. Do you think we'll reach that kind of point here? Because it seems like all it requires is a law in place for things to start moving. Do you see opposition to that kind of thing?

That's really interesting. I would prefer if we didn't go down that path. I do think it's really interesting, the Winnie the Pooh example, for how Chinese users refer to things obliquely. Ultimately, that's the thing: the wicked problem can also work in favor of users and communities. But I don't know.

Do you see these conversations happening since the data protection bill came into public discourse? People being worried that these kinds of bills would become acts, and then become part of the legal frameworks around platforms and platform conversation?

Yeah, definitely. I think there's been a lot of pushback against the intermediary guidelines rules as well, a lot of advocacy pushing back particularly against the automated takedown provisions. And another thing that ties into your question is one of the justifications that platforms and governments give for not making the rules of engagement very clear. They say that if we release rules which say that, for example, "free Kashmir" is seditious, people will find ways to game them. This is something often talked about in terms of algorithmic explainability and transparency: platforms are scared of people gaming the very specific rules applied online, and they're not sure how they can counter that.

Reggie has added to the conversation by saying that we have loads of languages, unlike China with Mandarin and Cantonese; it will be a nightmare. Yeah, absolutely. The languages by themselves are a nightmare for NLP to make sense of; you add a layer of culture in the way the languages are used, and it becomes even more difficult to moderate. And then you have different manifestations of all of these conversations. Cool. I think we are almost at the end of our time, and I don't see any more questions. Do you want to conclude?
No, I mean, I already concluded in my presentation, but I definitely hope that people see this and are able to understand more about what action they can take in responding to policy initiatives by the government, and to participate more actively in, or understand more actively, how platforms are shaping speech online.

Okay. Thank you, Divij. Awesome. Thank you, Reggie and Devika, for asking questions, and thanks to all the other participants who logged in. Thank you. Signing off.