The next talk will be tackling how social media companies are creating a global morality standard through content regulations. It will be presented by the two people standing here: the digital rights advocate Matthew Stender and the writer and activist Jillian C. York. Please give them a warm round of applause. Hello, everybody. I hope you all had a great Congress. Thank you for being here today. We're almost wrapped up with the Congress, but we appreciate you being here. My name is Matthew Stender. I am a communication strategist, creative director and digital rights advocate focusing on privacy, social media censorship, and freedom of press and expression. And I'm Jillian York, and I work at the Electronic Frontier Foundation, where I work on privacy and free expression issues as well as a few other things. And I'm based in Berlin. Anyone from Berlin? Awesome. Hope to see some of you there. Great. So today we're going to be talking about sin in the time of technology, and what we mean by that is the way in which corporations, particularly content platforms and social media platforms, are driving morality and our perception of it. We've got three key takeaways to start off with. The first is that social media companies have an unparalleled amount of influence over our modern communications. This we know; I think it's probably something everyone in this room can agree on. Second, these companies also play a huge role in shaping our global outlook on morality and what constitutes it. The ways in which we perceive different imagery and different speech are increasingly defined by the regulations these platforms impose on our daily activities. And third, they are entirely undemocratic. They're beholden to shareholders and governments, but not at all to the public: not to me, not to you. Rarely do they listen to us, and when they do, it takes a fairly exceptional amount of public pressure. And so that's our starting point.
That's what we want to kick off with. I'll pass the mic to Matthew. So thinking about these three takeaways, I'm going to bring it up to the top level for a moment to introduce an idea which some people have talked about: the rise of the techno class. Probably a lot of people in this room have followed the negotiations, leaked in part and then in full by WikiLeaks, over the Trans-Pacific Partnership, the TPP. What some people have raised during this debate is the idea of corporate capture: a world in which corporations have matured to the extent that they can now sue governments, and in which the multinational reach of many corporations is larger than the diplomatic reach of countries. Social media platforms are part of this: these companies are going to have the capacity to influence not only cultures, but people within cultures, and how those people communicate inside their own culture and globally. So as activists and technologists, I would like to propose that we start thinking beyond the product and service offerings of today's social media companies and start looking ahead two, five, ten years down the road, when these companies may have service offerings indistinguishable from today's ISPs and telcos. This is really to say that social media is moving past the era of the walled garden and into neo-empires. On this slide are some headlines about the different delivery mechanisms that social media companies, and also people like Elon Musk, are looking to roll out to almost, if not completely, leapfrog the existing technologies of terrestrial broadcasting, fiber optics, and the like.
So now we're looking at a world in which Facebook is going to have drones, Google is looking into balloons, and others are looking into low-orbit satellites to deliver directly to the end consumer, to the user, to the handset, the content which flows through these networks. One of the first things I believe we're going to see in this field is Free Basics. Facebook has a service that was launched as internet.org and has now been rebranded as Free Basics. Why this is interesting is that, on the one hand, Free Basics is a free service trying to get people who are not on the internet now to use Facebook's window to the world. It has maybe a couple dozen sites that are accessible, and it runs over countries' existing data networks. Reliance, the telecommunications company carrying it in India, is one of the larger telecoms there, but not the largest. Facebook is putting a lot of pressure on the government of India right now to have this service offered across the country. One of the ways this is problematic is that only a limited number of websites flow through it, and for people exposed to Free Basics, this might in some cases be their first time seeing the internet. An example that's interesting to think about is a lion born into a zoo. Evolution may have this lion dream of running wild on the plains of Africa, but it will never know that world. Free Basics users who know only Facebook's window to the internet may not all jump over to a full data package on their ISP, and many people may be stuck in Facebook's window to the world. In other words, we've reached an era where these companies have, as I've said, unprecedented control over our daily communications: both the information we can access and the speech and imagery we can express to the world and to each other.
So the postings and pages and friend requests of millions of politically active users have helped to make Mark Zuckerberg and his colleagues, as well as the people at Google and Twitter and all of these other fine companies, extremely rich. And yet, we're pushing back. Here I've got a great quote from Rebecca MacKinnon, where she refers to Facebook as Facebookistan, and I think that is an apt description of what we're looking at. These are corporations, but they're not beholden at all to the public, as we know. Instead, they've turned into quasi-dictatorships that dictate precisely how we behave on them. I also wanted to throw this one up to talk a little bit about the global speech norm. This is from Ben Wagner, who has written a number of pieces on this and who coined the concept of a global speech standard, which these companies have begun and are increasingly imposing upon us. This global speech standard essentially caters to everyone in the world, trying to make every user in every country and every government happy. But as a result, it has tamped down free speech to a very basic level that makes the governments of, let's say, the United States and Germany happy, as well as the governments of countries like Saudi Arabia. We're therefore looking at the lowest common denominator when it comes to some types of speech, and a flat gray standard when it comes to others. So, as Jillian just mentioned, we have countries in play, and social media companies are trying to pivot and play on an international field. But let's take a moment to look at the scale, scope, and size of these companies.
So, I just pulled some figures from the internet, with the latest census information. We have China with 1.37 billion people, India with 1.25 billion people, billions of practitioners of Christianity and Islam, and now we have Facebook with, according to their own statistics, 1.5 billion monthly active users. They're their statistics, and I'm sure many people here would like to dispute the numbers, but either way, these platforms are now large. Not larger than some religions, but Facebook has more monthly active users than China or India have citizens. So we're not talking about basement startups; we're talking about companies with the size and scale to be influential in a larger, institutional way. So: the Magna Carta, the US Constitution, the Universal Declaration of Human Rights, the Treaty of Maastricht, the Bible, the Koran. These are time-tested, or at least long-standing, foundational documents that place upon their constituents, whether citizens or spiritual adherents, a certain code of conduct. Facebook, as Jillian mentioned, is non-democratic. Facebook's terms and standards were written by a small group of individuals with a few compelling interests in mind, but we are now talking about 1.5 billion people on a monthly basis who are subservient to a terms of service they had no input on. So to pivot from there and bring it back to spirituality: why is this important? Well, spiritual morality has always been the province of religion. Religion has a monopoly on the soul, you could say; it is a set of rules which, if you obey them, allow you to avoid hell, or earn an afterlife, or be reincarnated, whatever the religious practice may be. Civil morality is quite interesting in the sense that the sovereign state, as a top-level institution, has the ability to put into place a series of statutes and regulations, the violation of which can send you to jail.
Another interesting note is that the state also has a monopoly on the use of sanctioned violence. That is to say, the official actors of the state are able to do things which the citizens of that state may not. And if we look at this concept of digital morality I spoke about earlier, with services like Free Basics introducing new individuals to the internet, then by a violation of the terms of service you can be excluded from these massive global networks. And Facebook really is actively trying to create, if not a monopoly, then a semi-monopoly on global connectivity in a lot of ways. So what drives Facebook? A few things. One is a protectionist legal framework. Copyright violations are something a lot of platforms stomped out pretty early; they don't want to be sued by the RIAA or the MPAA, and so there were mechanisms through which copyrighted material could be taken off the platform. They also limit potential competition, and I think this is quite interesting because they've shown it in two ways. One, they've purchased rivals or potential competitors; you see this with Instagram being bought by Facebook. But Facebook has also demonstrated the willingness to censor certain content. Tsu, at tsu.co, is a newer social site, and mentions of and links to this platform were deleted or not allowed on Facebook. So even using Facebook as a platform to talk about another platform was not allowed. A third component is operation on a global scale. It's not only the size of the company; it's also the global reach. Facebook maintains offices around the world, as other social media companies do. They engage in public diplomacy, and they operate in many countries and many languages. So let's take it to companies like Facebook for a moment. If we're looking at economics, you have the traditional 20th-century multinationals in the United States.
For those, the goal for the end user of the products was consumption. This is changing now. Facebook is looking to capture more and more parts of the supply chain: as a service provider, as a content moderator, and as the party responsible for negotiating and adjudicating content disputes. At the end of the day, users are really the product. The platform isn't for us Facebook users; it's really for advertisers. If we take the hierarchy of the platform, we have the corporation, then the advertisers, and then the users out at the fringes. So let's get into the nitty-gritty a little bit about what content moderation on these platforms actually looks like. I've put up two headlines from Adrian Chen, a journalist who wrote these for Gawker and Wired, respectively. They're both a couple of years old. What he did was investigate who was moderating the content on these platforms, and what he found, and accused these companies of, is outsourcing their content moderation to low-paid workers in developing countries. In the first article, I think Morocco was the country, and I'm going to show a slide from it in a bit, of what those content moderators worked with. The second article talked a lot about the use of workers in the Philippines for this purpose. We know that these workers are probably low-paid, and we know that they're given a very, very minimal time frame to look at the content they're presented with. So here's how it basically works across platforms, with small differences. I post something, and I'll show you some great examples of things I posted later. If I post it to my friends only, my friends can then report it to the company. If I post it publicly, anybody who can see it, anybody who's a user of the product, can report it to the company.
Once a piece of content is reported, a content moderator looks at it, and within a very small time frame, we're talking probably half a second to two seconds, based on the investigative research that's been done by a number of people, they have to decide whether this content fits the terms of service or not. Now, most of these companies have a legalistic terms of service as well as a set of community guidelines or community standards, which are clearer to the user but still often very vague. And so I want to get into a couple of examples that show that. Oh, and this slide is one of the examples I mentioned. You can't see it very well, so I won't leave it up for too long, but that is what content moderators at the outsourcing company oDesk were allegedly using to moderate content on Facebook. This next photo contains nudity. So... I think everyone probably knows who this is and has seen this photo. Yes? No? Okay. Kim Kardashian. This photo allegedly broke the internet. It was taken for Paper Magazine, was posted widely on the web, and was seen by many, many people. Now, this photograph definitely violates Facebook's terms of service. But Kim Kardashian is really famous and makes a lot of money, so in most instances, as far as I could tell, this photo was totally fine on Facebook. Now let's talk about those rules a little bit. Facebook says that they restrict nudity unless it is art. So they do make an exception for art, which may be why they allowed that image of Kim Kardashian's behind to stay up. But art is defined by the individual. And yet, at the same time, they make clear that, let's say, a photograph of Michelangelo's David or a photograph of another piece of art in a museum would be perfectly acceptable, whereas your average, everyday nudity probably is not going to be allowed to remain on the platform.
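The report-and-review flow described a moment ago (post, flag, a moderator's snap decision against the guidelines) can be sketched as a toy model. Everything here is hypothetical: the field names, the friends-only reporting rule, and the keyword check standing in for human judgment are illustrative assumptions, not any platform's actual pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    public: bool                          # public posts can be flagged by any user
    reports: list = field(default_factory=list)

def report(post: Post, reporter: str, viewer_is_friend: bool) -> bool:
    """A viewer flags a post; friends-only posts can only be flagged by friends."""
    if post.public or viewer_is_friend:
        post.reports.append(reporter)
        return True
    return False

def moderate(post: Post, banned_terms: set) -> str:
    """A moderator makes a keep/remove call in a very short review window.
    Real moderators apply long, vague guideline documents under time pressure;
    a crude keyword check stands in for that judgment here."""
    if not post.reports:
        return "unreviewed"
    violates = any(term in post.body.lower() for term in banned_terms)
    return "removed" if violates else "kept"

post = Post(author="alice", body="my new painting (contains nudity)", public=True)
report(post, reporter="bob", viewer_is_friend=False)
print(moderate(post, banned_terms={"nudity"}))   # prints "removed"
```

The point of the sketch is the asymmetry it makes visible: the decision rule is opaque to the author, triggered by other users, and applied in a single pass with no appeal step modeled at all.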
They also note that they restrict the display of nudity because their global community may be sensitive to this type of content, particularly because of cultural background or age. So this is Facebook, in their own community standards, telling you explicitly that they are toning down free speech to make everyone happy. This is another photograph. Germans particularly, I'm interested: is everyone familiar with the show The Golden Girls? Okay, quite a few. This is the actress Bea Arthur, from a 1991 painting of her by John Currin. It's unclear whether or not she sat for the painting. It's a very beautiful portrait of her. But I posted it on Facebook several times in a week, I encouraged my friends to report it, and in fact, Facebook found it not to be art. Sorry. Another image. This is by a Canadian artist called Rupi Kaur. She posted a series of images in which she was menstruating. She was trying to convey the normality of this, the fact that this is something all women, or most women, rather, go through. And Instagram took it down, unclear on the reasons; they told her it violated the terms of service, but weren't exactly clear as to why. And finally, this is another one, by an artist friend of mine; I'm afraid I have completely blanked on who did this particular piece. What they did was take famous works of nude art and have sex workers pose in the same poses as the pieces of art. I thought it was a really cool project. But Google Plus did not find it to be a really cool art project, and because of their guidelines on nudity, they banned it. This is a cat. I just want to make sure you're awake. It was totally allowed. So, I'm going to go into the problems of content moderation, and I'm going to go ahead and say that we also have a major diversity problem at these companies. These statistics are facts.
These are from the companies themselves; they all put out diversity reports recently. The statistics are captured a little bit differently, because they only collect data on ethnicity or nationality in their U.S. offices, since those reporting standards are sort of odd all over the world. So the first stats refer to their global staff, and the second ones in each line refer to their U.S. staff. But as you can see, these companies are largely made up of white men, which is probably not surprising. But it is a problem. Now, why is it a problem? Particularly when you're talking about policy teams: the people who build policies and regulations have an inherent bias. We all have an inherent bias. But what we've seen here is really a bias toward a sort of American prudishness: nudity is not allowed, full stop, while extreme violence, as long as it's fictional, is totally okay. And that's generally how these platforms operate. So I think that when we ensure there is diversity in the teams creating our tools, our technology, and our policies, we can ensure that diverse world views are brought into that creation process and that the policies are therefore more just. So what can we do about this problem, as technologists, as activists, as whomever you might identify as? The first thing, which I think a lot of the technologists here will agree with, is to develop decentralized networks. We need to work toward that ideal, because these companies are not getting any smaller; I'm not going to necessarily go out and say that they're too big to fail. The second is to push for greater transparency around terms of service takedowns. Now, I'm not a huge fan of transparency for the sake of transparency.
These companies have been putting out transparency reports for a long time that show which countries ask them to take down content or hand over user data, but we've already seen those transparency reports to be incredibly flawed. And so pushing for greater transparency around terms of service takedowns is only a first step. The third thing is that we need to demand that these companies adhere to global speech standards. We already have the Universal Declaration of Human Rights; I don't understand why we need companies to develop their own bespoke rules. By demanding that companies adhere to global speech standards, we can ensure that these are places of free expression, because it is unrealistic to just tell people to get off Facebook. I can't tell you how many times in the tech community over the years I've heard people say, well, if you don't like it, just leave. That's not a realistic option for many people around the world, and I think we all know that deep down. Thank you. The other thing I would say, though, is that public pressure works. We saw last year, with Facebook's real name policy, that a number of drag performers in the San Francisco Bay Area were kicked off the platform because they were using their performance names, their drag names, which is a completely legitimate thing to do, just as folks have hacker names or other pseudonyms. But those folks pushed back, they formed a coalition, and they got Facebook to change a little bit. It's not completely there yet, but they're making progress, and I'm hoping this goes well. And the last thing, and this is totally a pitch, throwing that right out there, is to support projects like ours, onlinecensorship.org, which I'm going to throw to Matthew to talk about, and another project, by the excellent Rebecca MacKinnon, called Ranking Digital Rights. So, a little bit of thinking outside the box: onlinecensorship.org is a platform that recently launched.
Users can go onto the platform and submit a short questionnaire if their content has been taken down by one of the platforms. Why we think this is exciting is because right now, as Jillian mentioned, the transparency reports are fundamentally flawed. We are looking to crowdsource information about the ways in which six social media companies are moderating and taking down content. Because there is no accountability and transparency in real time, we can't know exactly what is happening; but within this self-reported content takedown data, we're hoping to find trends, across the kinds of content being taken down, geographic trends, news-related trends. It's platforms like these that I hope will begin to spring up, so that the community can put tools in place through which people can be part of the reporting and transparency initiative. We launched about a month ago, and we're hoping to put out our first set of reports around March. And finally, I just want to close with one more quote before we slip into Q&A, which is to say that it's reasonable that we press Facebook on these questions of public responsibility while also acknowledging that Facebook cannot be all things to all people. We can demand that their design decisions and user policies be explicit, thoughtful, and open to public deliberation. But, and this is the most important part in my view, the choices that Facebook makes in its design and its policies are value judgments. This is political. I know you've heard that in a lot of talks; so have I. But I think we cannot forget that this is all political, and we have to address it as such. And for some people, if that means quitting the platform, that's fine too.
But I think we should still understand that our friends, our relatives, our families are using these platforms, and that we owe it to everybody to make them a better place for free expression and privacy. Thank you. Thank you so much. So now we have a section of Q&A. Anyone who has a question, please use one of the mics on the sides. And do we have a question from one of our viewers? No? Okay. Please proceed, number one. You just addressed it: especially after listening to your talk, I'm sort of on the verge of quitting Facebook, or starting to think about it. It's a hard decision. I've been on Facebook for, I think, six years now, and it is a dispute for me myself. I'm in this very strange position, and now I have to decide what to do. Is there any help out there for me, anything that takes my situation into account and helps me decide? That's such a hard question. I'll put on my privacy hat for just a second and say what I would say to people when they're making that consideration from a privacy viewpoint, because I do think the privacy implications of these platforms are often much more severe than those for speech, but this is what I do. So in that case, I think it's really about understanding your threat model. It's understanding what sort of threat you're under when it comes to the data collection these companies are undertaking, as well as the censorship, of course. But I think it really is a personal decision, and I'm sure there are great resources out there around digital security and around thinking through those threat-modeling processes, and perhaps those could be of help to you. I don't know if you want to add anything. No, I mean, I think it's one of these big toss-ups, right?
This is a system through which many people are connected; sometimes even email addresses roll over into Facebook. So I think it's about the opportunity cost: by leaving a platform, what do you have to lose, and what do you have to gain? But it's also important to remember that the snapshot of Facebook we see now is probably not going to get better; it's probably going to become more invasive, coming into different parts of our lives. So I think, from the security and privacy aspect, it's really just up to the individual. A short follow-up, if I'm allowed: the main point for me is not my personal implication. I'm quite aware that Facebook is a bad thing, and I can leave it. But I'm thinking that we're way past the point where we can each decide on our own, okay, is it good for me, or for my friend, or for my mom or my dad or whoever. We have to think about whether Facebook as such is a good thing for society, as you're addressing. So I think we have to move this decision-making from one person to a lot, lot, lot of persons. I agree, and I'll note that what we're talking about in the project we're working on together is a small piece of the broader issue, and I agree that this needs to be tackled from many angles. Okay, we have a question from one of our viewers on the internet, please. Yeah, one of the questions from the internet is: aren't the moderators the real problem, banning everything they don't really like, rather than the providers of the service? Can you please repeat that? The question was whether the moderators, who sometimes are volunteers, are the problem, because they ban everything that they don't like, rather than the providers of a certain service.
No, I mean, I would say that with the content moderators, we don't know who they are, so that's part of the issue. Over the years, when certain content has been taken down in a certain local or cultural context, particularly in the Arab world, I've heard the accusation that, oh, those content moderators are pro-Sisi, the dictator in Egypt, or whatever. I'm not sure how much merit that holds, because, like I said, we don't know who they are. But what I would say is that it doesn't feel like they're given the resources to do their jobs well. So even if they were the best, most neutral people on earth, they're given very little time, probably very little money, and not a whole lot of resources to work with in making those determinations. Thank you. We'll take a question from mic three, please. Test, test, okay. First off, thank you so much for the talk. I just have a basic question. It seems logical that Facebook is trying to put out this mantra of protecting the children; I can kind of get behind that. And it also seems, based on the fact that they have the quote-unquote real names policy, that they would also expect you to put in your real legal age. So if they're trying to censor things like nudity, why couldn't they simply use age as a criterion to protect children from nudity, while letting everyone who is above the legal age make their own decisions? Do you want to take that? I think there are a few factors. One, on the technical side, there's the question of what constitutes nudity, and in terms of process, once something is flagged, you get a chance, via some checkboxes, to say what sort of content it is. I could see a system in which content flagged as nudity gets referred to a special nudity moderator, and if the moderator confirms that it is nudity, it then gets filtered from everyone under the legal age, or whatever age. But I think it's part of a broader, more systematic approach by Facebook.
It's the broad strokes. It's really about dictating this digital morality baseline and saying: no, nobody in the world can see this. These are our hard lines, and it doesn't matter what age you are or where you reside; this is the box in which we are placing you. Content that falls inside this box is what we say you can see, and for anything that falls outside of it, regardless of your age or origin, you risk having your account suspended. So I think it's a mechanism of control. Thank you so much. I think, unfortunately, we've run out of time for questions. I would like to apologize to everyone who's standing. Maybe you'll have time to discuss afterwards. Thank you, everyone, and thank you both.