I'm so happy to be introducing this next talk and the next speaker. The next talk is something so close to me as a feminist. It is about major internet platforms deciding what we can and cannot see. And it's about how we can take back power as users and make certain interventions that can help us fight for our rights. It's called "Should I Stay or Should I Go." And it's by the amazing Jillian C. York. Jillian works for EFF. She mainly works on censorship at quasi-public platforms and lives in Berlin. Over to you.

Thank you. Hello, everyone. Thank you. Oh, you guys are so nice. Cool. So I am talking about this idea of should I stay or should I go. And in fact, at the end, I might even ask you to take a vote, whether I should stay or go. Because I've been, if you've seen any of my talks, you know that if there's one thing that I hate, it's the idea that these authoritarian private companies get to decide what we see and what we can say. And this is what I've been working on for the past seven or eight years. And I'm still fighting it. And it's not really changing. So I'm going to invite you to join me in this fight. So I've started with a meme, might be familiar to you. One does not simply quit Facebook, because it's true. And this is what I want to start addressing right now. Over the years, I've gone to so many events where I've spoken about this idea of whether we should quit Facebook, try to reform it, whatever. And there are two responses that I get from the audience. And I'd like you to think, as I say this, think maybe this is you. This could be you. The first one is, well, if you don't like it, why don't you just leave? Now, I love this answer, and I'm going to get to the root of why that's such a stupid answer in a little while. And then the second comment that I get, this one's not stupid. And I want to give it its due. But the second one is, well, why don't we just build decentralized networks instead? And I don't think that's a bad idea. Don't get me wrong.
I'm not here to rail against decentralization. But it's not a solution for everyone. And that's the other thing that I want to talk about, and why. But first, let's start here. So this is a sticker that you'll see on people's laptops and stuck everywhere from the folks at La Quadrature du Net. And I generally agree with it. I loved this sticker when I first saw it. I have one on my own, one of my machines anyway. Do not feed the Google. And then at the bottom, it says, decentralize all data and communications, libre software, end-to-end encryption. Now, that's a message that I can get behind, except for the part where it says all communications. Because I don't think that we should decentralize all communications. I'll get there in just a second. I just want to show you some funny memes first, if that's OK. So the internet is actually full of jokes about quitting Facebook and how hard it is to quit Facebook. Now, this talk is not just about Facebook. Hi, my friends at Facebook, if you're listening to the live stream, it's not just about Facebook. It's about Google and Instagram and Facebook and Twitter and all of these intermediary quasi-public spaces that we use every day and why they're so problematic. But I'm going to focus on Facebook because it's a case study. And because I just spend a lot of time researching it. So again, really popular memes all over the internet. I like this one. This is actually the Google result for when you search for a meme about quitting Facebook. There are so many that I couldn't even choose. I wanted to just spend my whole time sharing them all with you. But the truth is that in practice, it's difficult to quit Facebook. I've talked to a lot of people. Actually, let me just ask. How many people here, raise your hand, if you use Facebook still? That's a lot. OK, good. Good. So you know what I'm talking about? It's hard to leave. I'll tell you why I don't leave. I don't leave for two reasons. One, because I like to party.
And you know where the party invites happen? Facebook. Nobody invites you if you're not on Facebook. Just ask me. I don't invite people who aren't on Facebook anymore because I'm lazy. So that's one, laziness. Number two, the other reason I don't leave? I have lived in three or four countries at this point. I have been to more than 50 countries. I spend a lot of my time traveling and meeting new people. And what's the easiest way to keep them all in one place? Facebook. But of course, I'm not suggesting that we all stay on Facebook. To be clear, I don't work for Facebook. I don't even think they like me very much at this point. But I'm not here to say stay on Facebook, but I am here to say that Facebook is here to stay. And because of that, we need to start working toward these things and thinking about reasonable options rather than just saying to users that they should leave. Because if we do that, then we're leaving them behind. And we're not offering them a reasonable replacement for when they do leave Facebook. The second point is, who's to say that the next alternative won't be even worse? So let me talk you through some of the problems that I've seen. Maybe you'll agree with some of them, maybe not all of them. And then I want you to help me answer this question. So first point, decentralization. What about it? Oh, there we go. So what about decentralization? Now, I'm guessing that I'm not talking to people who have never heard of this concept. I'm guessing most of you are aware of the idea of decentralized social networks and perhaps on board with the concept. And again, I want to say I'm on board with it too, but just not as a full replacement for the existing centralized social networks that we have. So we've got the pros, you get better ownership and control over your data, potentially more privacy depending on who and how your social network is being run. 
You can have more authenticity and verification by using blockchain in your social network and have potentially better security, again, depending on who's running your decentralized social network. And of course, potentially greater freedom of expression. You could also have the opposite, but we'll save that for another day. But then for the average user, I think the struggle's real. Decentralized social networks are not easy. They're not easy to run. If they're using blockchain, they're not very environmentally friendly, and that's another conversation too, but ask me later if you don't know what I mean. And the barrier to entry for many users is really high. When I've talked to some of my friends who are not technologists, I'm not a technologist either, but not technologists, not part of this culture, they don't know what I mean. And then I realize that I'm inept at trying to explain to them in terms that they understand what a decentralized network even is. So if we can't get past that step yet, we've already failed. Now that's not to say we shouldn't keep building. And again, I'm going to encourage you to do that, build better, build smarter, and build so that more average users can join you. But the point of it all to me is that there are setbacks, there are negatives to this idea. And I think that that's where I can remind you that we're also, we're not the average user. Me, you, we're not the average user. So let's look at who they are. So Facebook right now has more than 1.2 billion daily active users. YouTube has more than 1 billion users in total, and Twitter boasts 313 million monthly active users. That's a lot of people. And I'm guessing there's a lot of overlap in these statistics. Now I'm wondering, do these numbers remind you of anything? Kind of, yeah, pretty much. Those numbers almost just about match up with the populations of China, India, and the United States. I'll just flip back and forth again so you can see that. 
1.28 billion, 1 billion, 313 million. And there you go. So these companies and their user populations have gotten as big as three of the world's biggest countries. That's how many people we're talking about. So when we say if you don't like it, just leave, we're forgetting about all of the people who have nothing to do with these communities, who don't know what the word decentralization means, who don't even necessarily know what centralization means, and we're then telling them, oh, just leave. Oh, come join this. Oh, this is better. And it may be better. But it also, I think this is where we need to consider the role that we play in the fact that we might be the elites on that one. So a reminder, we're not the average user, and I believe that we have the responsibility to fix institutions, as they may be, that most of the public relies on. So I wanna talk through some of the problems, and here's where I'm gonna go right straight back to Facebook. So what are the issues with Facebook? Again, a lot of you are on it. How many of you are on it but frustrated with it? Okay, so you'll feel me in this next slide. So first, let's talk a little about algorithms since they're in the title of my talk. So first, this is a great quote from Zeynep Tufekci, it was from a New York Times piece. Software giants would like us to believe that their algorithms are objective and neutral, so they can avoid responsibility for their enormous power as gatekeepers while maintaining as large an audience as possible. This is absolutely true. Number one, these companies, what's their primary motive? Money, profit, exactly. So your Facebooks, your Googles, your Twitters, whatever human rights language they use, whatever lofty missions that they have about wanting to connect the world and make it a better place, ultimately, their real motivator is their bottom line. And so as they implement these things, they're trying to bring costs down, bring revenue up.
And so as they apply algorithms to these decisions, they're doing it with money in mind, not with you in mind. We'll talk about some of the examples of how they've misused algorithms, but I think the part that is really important is this idea of software giants and sometimes developers liking us to believe that algorithms are objective and neutral. I wanna challenge that idea entirely. Algorithms are not objective and neutral because when you build code, you're putting all of the biases that you have brought with you into the world, all of your life experience, whether you're a man, a woman, whether you come from another background, whether you're from a different country, whether you're from here, whatever you are, you're bringing your personal experience into that and it's not possible to have an entirely neutral technology. And so when Facebook says, oh, but we're just a neutral platform, we're not a gatekeeper, I'm gonna call bullshit. That said, bias isn't inherently negative, and we can also bring positive biases into that. I'm gonna come back to that a little bit. While we're talking about algorithms, I realize that there are two much smarter people on this subject giving talks tomorrow. I'm just gonna leave that up while I continue. You should make a note of these two talks, very good ones, on algorithms. So what are some of the ways that algorithms are being used by companies like Facebook? First, you've got your feed. This is what Claudio's talk is about tomorrow. He's presenting a project called Facebook Tracking Exposed about what Facebook's actually presenting to you in your feed, as well as some other stuff. So when you go to Facebook, when you go to your homepage and you get your feed, what you're getting is a carefully, algorithmically curated version of the world. Now, there have been some really interesting studies done on this, particularly around things like positive messaging versus negative messaging.
In fact, some fairly terrifying and academic studies that probably shouldn't have passed the IRB. But you're also getting what Facebook thinks is important in the world at the time. So last year, during the elections in the United States, a lot of conservatives were really angry because they felt that Facebook was censoring conservative news and that's not really true. So that was actually a manually done thing. But the fact is that Facebook does add values to different types of content. This one's more important. This one's less important. A baby photo from someone who just had a child probably almost always shows up in your feed. If your friend just had a baby, you're gonna know about it. If they just had a birthday, you might know about it. And if they're sad, you probably are less likely to know about it. So there are all of these different ways that your feed is manipulated. Then on the other hand, you have the ways in which your content is actually manipulated after the fact. So the way that these companies conduct censorship, and again, more detail in the policy in just a minute, but the way that they conduct censorship is twofold. One is this concept of community policing or snitching. And the other one is through the use of algorithms to either identify or identify and take down content. And we're starting to see this more and more. YouTube is employing algorithms to identify the worst of the worst when it comes to terrorism. That's how they phrased it. They say that algorithms are better, or their algorithms are better at identifying terrorist content than humans are, which I find really surprising and not true at all. But those are just two of the ways that they're being used. And we also see them being used in backend programming decisions. So I'm just gonna skip ahead so that I don't go over time. So then let's look at the policy just a little bit. This is Facebook's stated mission. 
Their mission is to give people the power to share and make the world more open and connected, and make money, of course. But when we look at the way that these policies and these mission statements are crafted, we also have to look at who is crafting them. Just like when we look at who's building the AI algorithms, who's building the code that gives you your Facebook feed or that decides what's taken down, these are all areas of human input. These are all people who are bringing their own history and their own biases to the table when they do this. And so who are Facebook's top executives? Well, I can tell you that five of them studied at Harvard, four of them went to law school, and no, sorry, all but one of them went to law school, four of them went to elite institutions for both undergraduate and graduate degrees, and all but one of them are white. That's fun and really interesting and also an extraordinary lack of diversity. I'll get back to that. So let's just give one example of a policy. I'm sorry that this came out so ugly, it's my fault. So Facebook's community standards are their non-legalese document that decides what you can and cannot say, and that was crafted by that group of elites that I just mentioned. But they've gotten so complicated that Facebook actually has started issuing repeated clarifying documents to explain their policies over and over again. And why is that happening? Well, because frankly, it's one group of people in one board room making these decisions without any real input from the public. So just to give you an example, this is their nudity policy, and here's the second part of it. I'm gonna leave that up for just a second so you can try to read it, but tell me that this company is not tying itself in knots, trying to explain, oh, this is okay, you can have like a little bit of side boob, but no butt, and you can't have this, and you can't have that. They're getting really complex in trying to explain a really simple concept. 
And so I think we need to think about how ridiculous these policies are and what we can do about them as well. And well, why are things this way? Part of me feels that it's because of the lack of diversity at the top of these companies. Some of you may have seen today that there was a document that came out of Google, from a Google employee. It was leaked to Gizmodo, and it was this Google employee's 10-page rant about how their diversity hiring practices are bad and how we shouldn't focus so much on making sure that we have women and minorities because it's really about thought diversity and blah, blah, blah, no it isn't. Like I said, we bring our experiences to the table, whether we're putting them into code or into policy or into writing. We bring our human experiences to the table and that's why diversity is important. Also, there's this whole idea that diversity of thought is more important, and this has been popularized by a Facebook board member, Peter Thiel, who actually says that diversity of thought is much more important than other kinds of diversity. But he's, you know, one of the members of Facebook's all-white board of directors. I'm sure that there's lots of diversity of thought at Harvard Law School, but I think that that diversity of thought really comes from our backgrounds. And so having a group of people who are diverse in appearance is also about diversity of thought, and that's something that we forget when we try to make these into a binary. So if you can't read those numbers, by the way, that's 35% women at Facebook overall, 65% men. But then when you talk about tech jobs and who's actually programming, it goes to 19% women and 81% men. Feel free to raise your hand if you think that those are acceptable numbers. But we can talk about that after. I will look forward to fiercely debating it with all of you. And it's not just about gender, it's also about a lot of other things.
Facebook's employees in the United States, the United States is about 13% black, just census statistics, I didn't make them up. Facebook's employees are only 3% black. The US population, Hispanic population, again, census data, about 20%. Facebook's 5%. These numbers don't reflect reality in society. This, a little bit too, is who's making the decisions about what we can see. Now, some of it, like I said, is algorithmic. Algorithms might be identifying this content and telling people to take it down, telling the outsourced workers to do this. But who are the people who actually make that final call? It looks something like this. They're contract workers. They get paid about $10 to $15 an hour. They're usually in places like the Philippines. They don't have full benefits or full psychological assistance. And they're not always really well trained to do this. So in the debate between what's better, an algorithm or a human being, consider these facts as well. Another point of that to me is that this is actually a labor and a human rights issue as well. Why should someone else have to see the beheading video and take it down so that you don't have to? We're talking about a real disparity in privilege here, in that, oh, I don't wanna see these things in my feed, whether it's nudity or sexuality or a beheading video. So let's pay somebody in the Philippines to look at it for me. So how does this work exactly? How well is it working, this whole content moderation problem? So like I said, Google says that AI is better than humans at scrubbing extremist YouTube content. And I say, show me the results. But the implementation of these policies overall is pretty bad. In this case, we have an example where Facebook has been repeatedly taking down the use of the word dyke, which as some of you may know is a word that lesbians reclaimed quite a long time ago and use freely amongst themselves and with others much of the time.
And using that word is not inherently bad, just like using most words is not inherently bad. But Facebook has repeatedly made this mistake in their content moderation by actually censoring people who are engaged in either counterspeech or reclaimed speech. And here's just another example of that. In this case, somebody had posted to Facebook a death threat that they had gotten, a letter. She wrote out the text of the letter and explained, you know, somebody called my children this. And Facebook took that content down and suspended her because she had shared horrible speech that someone else had said. Now, what we don't know is whether it is humans or algorithms making those decisions. We do know that both are pretty bad at it and that there are ways that we can improve that, and I'll get to that in just a second. I'm running out of time here. We do know that, but we don't know whether in this particular situation it was an algorithm that failed to identify the context around her use of this word or whether it was a human being that failed to identify the context around the use of this word. But I guess I would ask at this point, do we really want these authoritarian companies making these decisions for us at all? And if so, or if we can't change that, then what can we change? I see this as kind of a classic dilemma. Do we tear down old institutions? And somebody laughed at me earlier for calling Facebook an old institution. I know, I know it's only like 12, 13 years old. But at this point, it is the most robust, biggest, oldest social network that, well, biggest and oldest together, let's say. So it's a classic dilemma. Do we tear it down or start fresh or do we build new institutions? Or can we do both? And I think that that's what I was trying to get at earlier, I'm not here to discourage anyone from building alternative platforms or social networks or whatever you want to call them. I think the more, the better.
But at the same time, I think that we have to remember all of the vulnerable users who are going to stay on these platforms and what we can do to help make these platforms a better place for them. To that end, this is my project. It's called onlinecensorship.org and we founded it a few years ago after we saw a lot of censorship around a specific issue that was happening in the Middle East. And we originally came up with the idea as a way to get reports from users who had experienced content takedowns on these social networks or account suspensions and get that data and use it in academic reporting and in other ways. But as time has gone on, we've actually seen more of an advocacy role for ourselves, and so I want to tell you just a little bit about what I think we should do and what my organization is doing to fix that. So what do we actually want when it comes to changing these social networks? I've narrowed it down to five things. I'm not saying these are the only five, but these are the five that I see as the most important. The first one, due process. If we're going to have these authoritarian companies making decisions about what speech is acceptable, then we need to remember that, well, authoritarian companies are not the same thing as authoritarian governments. You're, well, hopefully, not going to get put in jail for fighting back against them, for protesting. And in fact, because their bottom line is money, we have the opportunity to influence shareholders and others who can help change these policies. So in instituting due process into the platform, what I'm asking for is that Facebook and Google and Twitter, et cetera, et cetera, that they all provide a means by which users in every situation can appeal the decision that's been made about their content. We know that these companies make unfair decisions every day. We know that they apologize for it repeatedly, but they never seem to do anything to make it easier for users to fix.
And most of the people who get their content put back up or their accounts put back up are people like me who have a really, really loud voice and are not gonna stop screaming at Facebook, or actual celebrities. So, due process: appeals for users. The second one is transparency. For me, this doesn't just mean transparency around content takedowns, which is something that has been progressing over the years and will continue to do so, but also transparency about a couple of other things. One, about the algorithms behind your feed, the algorithms that determine what is displayed there. We can have transparency around that. It's possible. It's possible that Facebook could also give you more control over how you utilize those algorithms. I'm not saying nobody's asked for it, but there has not yet been a concerted campaign for that kind of algorithmic transparency, and I think it's time. The other transparency that we could have, of course, is about what the companies or platforms collect and do with our data. And that's something that we're demanding as well. Data portability. I think this one is probably pretty clear to everyone here, but I'm happy to answer questions after if you have them. Diversity. Again, I don't believe in diversity just for diversity's sake. I believe in diversity because I believe that when you have a diverse group of people making decisions, then you have more input from their backgrounds, from their histories, their personal experiences into why things are as they are. So when I talk about diversity at Facebook, the fact that their board is all white, the top executives at the company making policy are not only mostly white, but also almost all went to Harvard? That's not diversity. We can do better. And then the last one, of course, is adherence to human rights norms. And this is where I believe that we need to bring the free expression argument back in.
I noticed there aren't that many talks about censorship compared to four years ago and there's a lot of reasons for that. Most authoritarian and democratic governments have kind of moved on from the censorship bandwagon to the surveillance one. You don't hear about censorship, internet censorship so much anymore. Turkey, of course, and you've got the recent examples from a couple of other places, but it's not always at the top of our minds. And I think we need to bring it back and recognize that these companies have more control over our speech than most governments these days. I know that sounds dramatic, but I assure you it's true and I'm happy to debate that point as well. And so as we bring, as we consider this, I think it's important to bring back the norms, the human rights norms that we've all decided on as cultures previously. Let's bring back the Universal Declaration of Human Rights and ask Facebook to be accountable to it. Why do they ban nudity from their platform? Probably because of some parochial American ideas about what's appropriate, but I don't find that acceptable when we're talking about universal human rights and speech. And I think that there are a lot of other things we can do. Like I said, these are my top five. I would also love to see platforms consider things like non-ad-based revenue models, but that'll have to be another talk for another day. So without further ado, if you wanna chat with me, I'm gonna be here to answer some questions right now, but you can also find me probably around the bar tonight or by tweeting at me, et cetera, et cetera. So thank you very much. I appreciate you being here and I hope that that was useful to you. We're now open for questions. So if you have any questions, please use that mic. Hello. Hello. I have a mix between a comment and a suggestion. One is not just data portability, but what matters is the federation. One of the risks I'm feeling when a new social network get proposed is that nobody's there. 
And this is a problem that has been addressed with telcos in the '90s, when they forced the monopolist telcos to permit their customers to bring their phone numbers to another telco. Imagine if we could bring our Facebook profile to another social network like Ello or Diaspora. And if we're communicating within the same social network, we're using that infrastructure. If we're communicating with a user on Facebook, we're just making the infrastructures collaborate. That is something where I don't know if there has been some progress, but I really wish to see it. And that is my question. And the other point is, when I deal with activists, they were using Facebook mostly to say, we have to communicate with the public. Yes, but you are also a person who does not want to be tracked. The only compromise we found was separating the accounts. Your personal life uses one account, your political life another account. That at least can split the risk. I don't know if there is some kind of safety measure, or if you are aware of some kind of training that is done for that.

I'll answer the second one first. I'm not. I'd love to talk more about that with you. Go to his talk tomorrow. That was one of the ones I put up. To your first point, no. And in fact, when it comes to federation, I would actually say that these companies are moving backwards. So I also sometimes do digital security trainings to teach people to use more secure tools. And a couple of years ago, you could use Google Talk, or GChat, I guess we call it, us old-timers, and Facebook Messenger, or when it used to just be Facebook Chat, you could actually use those with OTR, and both companies have pulled that so you can no longer do so. I find that really unfortunate. So I think that they're actually moving away from federation in all cases. I don't know about other examples from outside of the corporate world.
I'm so focused on these corporate platforms at the moment that I admittedly have not looked into all of the other options that exist outside of them. That'll be my next phase. Do we have any more?

Sorry, it took me a while to walk. So essentially, I really like the five demands you made. I think it synthesizes a lot of policy conversations that we're having nowadays. So I have two questions, and one follows up on the other. The first one is whether you would say this is only related to Facebook and not to other intermediaries, like, off the top of my mind, another powerful one, Google. And if it is also applicable to Google, then when we're talking about algorithmic transparency, one, is it applicable the same way in which we demand transparency from the state, and two, how would that be a risk in terms of algorithmic manipulation by end users?

That's a really good point. So I would say that the first thing is yes, this does apply to other companies as well. I chose Facebook as a case study because I've been doing a lot of research on it over the past year, so forgive me for that one. But yeah, it absolutely applies to other companies. I'm not sure how to answer the second question, because I think that what I mean about algorithmic transparency in this case is for Facebook, or Facebook or company X, to show what the input values are, to give transparency around the data sets that they're using. I don't know how much that opens it to manipulation. I'd like to talk more about that because it's not something I've thought of before.
But I think that this is something where, even if it's not, say, opening the black box entirely, because they're not, I think that the company's resistance to that is purely financial, but there are other ways that they could make the algorithms more user-friendly and more catered to specific needs. One of the examples that I've heard about is a friend of mine who has a severe phobia, and she suggested that, for example, if Facebook has the ability to identify and hash beheading videos, as they're doing, by the way, they are using algorithms to identify and hash images and put them in a shared database that they share with Twitter and a couple of other companies, if they're doing that, then why couldn't they do the same for, like, I don't know, snakes, where you could then say, okay, I want to get all of this out of my feed? So I know that that's not exactly to the point, but I think that giving users a lot more control and sharing more transparently how those decisions are made, that would be the first step. But I'm happy to talk more about that later. I'm not sure I answered it correctly.

Hi, Jillian, Andy, EQE on Twitter. Thanks for the talk. I was really interested in what you said about alternative revenue models, and I'm curious if you have any thoughts on that, or if you know anybody who's working in that area. How do we financially make a platform that is sustainable in the long term without taking VC, without going to ad revenue exclusively?

Oh, I'm afraid I don't have answers, which is why I only made it as a throwaway point. I mean, I know, right? I think it's interesting, though. Like, how many people here would pay to use Facebook? Oh, so it's actually not that many. How many of you would pay to use, let's not call it Facebook, let's call it something that looks like Facebook, but is a little bit different? Basebook? Okay, all right, that's still less than half the room. So maybe that's not the best model. I'm not sure what it is.
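The image-hashing and shared-database mechanism mentioned above can be sketched in a few lines. Everything here (the class, the function names, the "snakes" list) is a hypothetical illustration, not any platform's actual API, and real systems use perceptual hashes such as PhotoDNA rather than an exact cryptographic hash, which only matches byte-identical files and stands in here for simplicity:

```python
# Sketch of the shared hash-database idea: platforms hash known media and
# share the hashes; hypothetically, users could also subscribe to their own
# hash lists (e.g. "snakes") to filter their feeds. SHA-256 is a stand-in
# for a perceptual hash like PhotoDNA.
import hashlib


def media_hash(data: bytes) -> str:
    """Hash a media file; a stand-in for a perceptual hash."""
    return hashlib.sha256(data).hexdigest()


class FeedFilter:
    def __init__(self):
        # The set of blocked hashes: a shared industry database,
        # or a user-chosen list.
        self.blocked = set()

    def subscribe(self, hashes):
        """Add a hash list to filter against."""
        self.blocked.update(hashes)

    def allow(self, data: bytes) -> bool:
        """True if this media is not in any subscribed hash list."""
        return media_hash(data) not in self.blocked


# Usage: a user subscribes to a (hypothetical) snake-image hash list,
# and matching media is dropped from their feed.
snake_pic = b"\x89PNG...snake"  # placeholder bytes for an image file
feed = FeedFilter()
feed.subscribe([media_hash(snake_pic)])
print(feed.allow(snake_pic))              # the matched image is filtered out
print(feed.allow(b"\x89PNG...kitten"))    # everything else passes through
```

The design point is that the matching machinery the companies already run for "the worst of the worst" is generic: what changes is only who controls the hash lists.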
I often fail to think about the financial incentives, and it's not great for my advocacy, because I'm like, burn it all down. But yes, I think it's a really good question. I apologize for not having more answers. I do have a couple of names for you, but not off the top of my head. I will look and send them. A tweet. Yeah, perfect.

Hello. So federation is quite important, and open standards are quite important, for many of us here. And I've been observing the same thing that you said is happening, the backtracking from open standards. For example, on Twitter, five years ago, you were able to follow somebody by an RSS feed. Every account had an RSS feed. I miss this. Anybody else? No? No, I'm still using RSS. It's live? Twitter? Okay, never mind, I'm just a bad person. So this is happening, obviously. Another thing is that there's obviously the network effect of people being only on the closed social networks and not even showing up on the federated open ones, which means that other people are like, why would I go to the other network? Why would I go to, I don't know, Mastodon or Diaspora or whatever? There's, quote, unquote, nobody there. While at the same time, on Mastodon, for example, there's a great discussion happening right now on exactly the topic you were talking about, where you have different instances with different rules, with different, I don't know, diverse ecosystems of users, and there's a discussion of how the whole federated network should handle this. Some admins are blocking some instances; some admins are blocking all of the instances and only allowing some others to federate. This is something that is happening there. I think this is something that you might be interested in, because this is happening live. So I guess my question is, perhaps we could have you on the open side of things. Thank you.

Could I be on the open side? Wait, is that a question, though?
The question is, could we expect at some point that you would set up an account on the open side of things?

Oh, yes. And I don't want to say this just in case it doesn't happen, but there's a reason that I've waited. I'll tell you later. I can't, yes. But yes, the answer is yes.

Hello. Hello. So, you mentioned briefly the way in which parochial American standards and values kind of got encoded into social networks. And I see this both with Facebook, but also to a larger extent on YouTube, as a thing that very much adheres to American cultural norms. And being from a Scandinavian-ish place, our cultural norms are somewhat different. And cultural norms are different from place to place. What I'm wondering is, do you know whether anybody has studied the effects that the encoding of these cultural norms into social media has had on cultures where the norms are different? That is to say, are the big social media giants essentially Americanizing everybody?

So, great question. Another thing that I can't say on stage, but I'll tell you later. I have not seen academic research on that point, so I'm doing some work on that very question right now, the impact of the cultural norms that these companies are imposing on other parts of the world. Let me just say, because I think I failed to say it in my talk, I was a little nervous about time: one of the things here is that Facebook, YouTube, and Twitter to a lesser degree, they're a little bit different, but Facebook and YouTube are both really good examples of this, where violence, for the large part, is completely allowed on these platforms. Cartoon violence, regular violence, human violence, all of it, sexual violence in some cases, totally allowed, and the only exceptions are things like glorification of terrorism. Whereas nudity is completely the opposite, which is to say that it's almost never allowed, and the only exceptions are things like Michelangelo's David.
What's interesting about that to me is how very opposite of my life in Germany it is. So I'm American, obviously. I've been living in Germany for three years, and culturally there, it's completely the opposite. Violence, not so much. Sex everywhere, nudity everywhere, I love it. Love it. But yeah, I mean, the companies have absolutely exported these values, and it almost becomes a kind of digital colonialism, which, again, I could talk about for hours. But I think that we do have to consider the ways that these companies are impacting cultural norms in other places. And another thing, too: for governments like yours, for example, not that you're a member of it or anything, have you ever seen a government block Facebook? Because I can tell you it's happened three times total, and the thing that really pisses me off about this is that basically what you have is governments like Sweden's, where I know that there have been private meetings between Facebook and members of the Swedish government about the fact that nudity is not allowed, and there have been feminists within the government who have argued for this. And if they don't like it, why don't they block the site? Why don't they show these companies who's boss and block the website? I'm not advocating for censorship, but you're going to have it either way. I would love to see what would happen if Iceland was like, oh, we're just going to block Facebook. You asked for that one.

So speaking of that, I loved your talk, particularly the beginning, and if you are looking for academic research, there was just a paper published called Systematizing Decentralization and Privacy, The Long Road in Privacy Enhancing Technologies, which surveys 250 papers on decentralization from the last 20 years.

Yes, send it to me, please.
It turns out, you're basically, you know, you're correct: it's really hard, you lose privacy, you gain security, sometimes it involves advanced crypto. Complicated, really hard. But my question is actually on the machine learning and the transparency. So I have a PhD in machine learning, I used to work at Yahoo, so I'm kind of familiar with how it works, and I like your demand. I think it's a good idea, like, why not? Maybe there are some security and privacy problems, but it could probably work out. But my concern would be, and I'm not sure this would be the case, that if you actually did get the input vectors, all of the parameters of the algorithms, they would be completely indecipherable. I know, I worked on these things; they're indecipherable. And the data sets are dynamic, so they're typically updated all the time. It's very hard to even determine, for a given instance, that we used this data set; industry isn't like an academic paper. So I'm not sure if that's actually a possible thing. If you put a lawyer in a room with that code, they wouldn't understand it, which is unfortunate, because the code itself is really understandable. The algorithms are simple. It's the data sets which are complicated.

That's fair.

And so I was wondering if you have any thoughts on that, ways you could extend the demand to deal with that sort of real-world problem.

Yeah, no, absolutely. I mean, let me shoot this back to you and tell me if it makes sense. But also, I mean, I think that it's, well, yeah. Anyway, so I have been at EFF for about six years now, and there are lots of open source evangelists in my world, lots of people telling me that I should use open source, use open source, use open source. And of course, I agree, for a number of reasons. But one of the arguments that drives me nuts is when people say, oh, well, you can just inspect the code. Oh, can I, really, with my skill set? No, I'm a sociologist. I can't inspect the code. I have to trust someone else to do that.
And so in this case, I almost wonder if maybe what I'm asking for is not actually putting those vectors out there for me, but these companies telling us, these companies actually having a conversation with us. Maybe it really is that basic, because they're not even willing to do that. They're not willing to talk about the absolute, the real basics. They won't talk about why they make the decisions that they make, why they take this down, how they implement their own rules. So you're right, maybe I'm asking for something too complicated, but I also wonder if there's any kind of parallel there, because I think a lot of people have heard that argument over the years for open source and been, like me, unconvinced, and almost felt kind of like, oh, okay, yeah, let me just look at the code.

I guess what I was saying is that I think you could actually read most machine learning algorithms; they're really simple compared to most Linux or networking stuff. It's the data sets. The racism in the data sets reflects a racist world. It's kind of a hard problem. You can't make a fair data set, because there isn't a non-racist world right now, which is sad, but that's where we're at. We have to fight for that in some revolutionary manner.

I mean, it's true, and I'm not sure if it's possible, so yeah. I mean, you can critique algorithms, but you could also critique the larger system which produces the world which makes the fucked-up data.

Absolutely, no, totally agree.

Hi, I'm a Google Plus user, which didn't get much mention here.

Oh, you're the one.

No, we exist.

Sorry, sorry, sorry, Google.

Yeah, I do have a Facebook account, which I originally got mostly because everybody at work had a Facebook account and I figured I should get one too, so I got one. And lots of family turned out to be on Facebook, and I noticed that almost nobody was posting anything interesting there.
So I mostly ignored it, except for a few family members for whom I turned on notifications, because on the rare occasion when they did post something, it was something interesting. Eventually, I felt really forced to use Facebook when two close friends of mine moved to Indonesia for three years, and they used Facebook as their primary platform for communication, so I turned notifications on for that. By now, on Facebook, I really only check the notifications. Where I really interact with people, where I post my own stuff, that's Google Plus. But I noticed that Google keeps undermining their own platform, and increasingly people who used to love Google Plus as the better social network are starting to leave it and move to Facebook, because apparently Facebook fucks up less than Google Plus does.

It's hard to believe, but I believe you.

Yeah, but anyway, all of this recently got me thinking. I would really like to have my own social network site which could interact with all the others. But maybe I should start on a completely different track first. I've been doing freelance work for a number of banks recently, and I learned about a new law in Europe concerning banks. Banks have their own websites and their own mobile apps where you can do your banking stuff, but there's apparently a European law that requires them to open up that information to other parties if those parties use the right authorization, et cetera. So one bank can now have an app which shows data from your bank account at another bank, and you can manage that account through the app of a bank where the account doesn't belong. And I would like to have the same kind of law for social networks, so you can have a social network where you can check your feed on another social network and interact with it without having to use that network. So I could use Google Plus to read the posts from my friends on Facebook or on Twitter or wherever they are.
Or we could start a new social network that can interact with all of those, but on our own terms, without their advertising, without their filters, et cetera, that kind of stuff. But it requires a pretty broad international law, which means countries need to be aware of this and want to enforce open interactions here. That's basically my idea.

Cool, yeah, no, I think that's a really interesting idea. And I didn't get a lot into law; I could totally chat with some of you about that another time. But I think that this is one of the other questions, too: what kind of regulation of these companies is possible? I was just trying to keep the talk nice and concise, so I failed to bring that up. But yeah, really good point, thank you.

Awesome, well, thanks everyone. Oh, wait, one more thing. Should I stay or should I go? Raise your hand if I should, no, applaud if you think I should stay on Facebook. Okay, and if you think I should go. Damn. All right, thank you guys, I'll have to quit now.
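The interoperability idea in that last question, a banking-style open-API mandate applied to social networks, can be sketched in miniature. Everything below is hypothetical: the provider interface, the network names, and the aggregator are illustrations of the concept, not any real network's API.

```python
# Sketch of a mandated feed-interoperability interface: each network
# must expose its feed to authorized third-party clients, so one client
# can aggregate feeds across networks. All names are hypothetical.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Post:
    author: str
    text: str


class FeedProvider(Protocol):
    """The interface a law might require every network to expose."""
    def fetch_feed(self, user_token: str) -> List[Post]: ...


class FakeNetwork:
    """Stand-in for a real network implementing the mandated interface."""
    def __init__(self, name: str, posts: List[Post]):
        self.name = name
        self.posts = posts

    def fetch_feed(self, user_token: str) -> List[Post]:
        # A real network would validate the authorization token and
        # apply the user's permissions before returning anything.
        return self.posts


def aggregate(providers: List[FeedProvider], token: str) -> List[Post]:
    """One third-party client reading several networks' feeds."""
    return [post for provider in providers
            for post in provider.fetch_feed(token)]


networks = [
    FakeNetwork("NetworkA", [Post("alice", "hello from A")]),
    FakeNetwork("NetworkB", [Post("bob", "hello from B")]),
]
feed = aggregate(networks, "user-authorization-token")
print([p.author for p in feed])  # prints ['alice', 'bob']
```

The design point is that the law would mandate the interface, not the implementation: each network keeps its own backend, its own rules, and its own storage, but any authorized client can read across them, which is exactly what the questioner describes the European banking rules doing for accounts.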