Our next speaker tonight is Matthew Stender. Matthew is a technology ethicist, currently a project strategist with onlinecensorship.org. And tonight he's going to be talking to us about some of the different ways in which social networks and other proprietary platforms use the data they've collected about us in intrusive ways. So please give a welcome to Matthew Stender.

Hello, everybody. Whoa, whoa, whoa. Hello, everybody. Thank you so much for joining me here this evening. I know it's getting to that twilight of camp, and I hear there's grappa flowing at the Italian embassy later, so you'll probably find me there after this. My name is Matthew Stender. I am a somewhat self-proclaimed tech ethicist. For the last year and a half I've been researching around a central theme: what are the cultural and ethical implications of emerging technology, and what does it mean to be human in the 21st century? I'm really interested in examining the social contract between us as humans and the technology that we make, and how that technology has moved beyond being mere tools and is now becoming part of the way that we create our epistemic reality. That is, the information we have, the knowledge that paints the picture of the world around us, is now very much influenced, nudged, and activated by these systems. Tonight I'm going to be talking about proprietary platforms, but also proprietary algorithms, as opposed to the open-source technology that most people at this camp are well versed in. I think sometimes we forget what's going on in the proprietary world. What is it exactly that we're fighting? What is it exactly that corporations and governments are doing that we should not just be wary of, but can actually take stances against? So tonight I'm going to examine what we can call a hashtag, but it's more than that; it's a theory I've been working on for a while, and I call it MIMICS.
MIMICS is an acronym, an initialism, that stands for Monitor, Index, Manipulate, Intercept, Censor, and Silo. I'm going to walk through these letters and talk a little bit about why I think it's important that we put some sort of communication strategy behind proprietariness, so that we can relate it to people who are not so technically versed and help them understand what these systems are able to do, in an opaque process, that affects the way we live our lives. So once again, my name is Matthew Stender, and I'm going to talk about how, beyond this MIMICS concept, platforms have really geared themselves up to be against humanity. There's a saying that art imitates life, but I think more and more our technology is mimicking reality. I'm going to start this talk with a sociological slant, philosophical if you will, and cover some basic things. This is how I have come to see the world, and how I have tried to build strategies for communicating these things. We're surrounded by so much technology, and we have everything at our fingertips. But the problem is that this technology is not neutral; a lot of it is capitalist-driven. Manufactured demand gets us to voluntarily give away our information, which is then used for profit maximization. The algorithms, and the information that is ever more tailored and curated for whoever these platforms think we are, are in a constant state of evolution. Earlier today, I gave a talk on facial recognition technology, and one of the things that interests me about facial recognition is that an image we snap, which may go to our iCloud, or Google, or Facebook, is something we need a device to see. We need battery in our phone in order to access that photo. It's not a human-readable image first; it is a machine-readable image first, and a human-readable one only second.
And what I find particularly disturbing about this concept is that these images can be continuously churned: new data, new assumptions, new information can be accumulated on a single image each time it's scanned inside a large database. So we're now dealing with a world in which so many dynamic information points have been taken from us, or given by us to technology companies. And this, I believe, really does change the social relationships we are able to experience with others, and changes the way we ourselves are able to self-identify. It's both an active and a passive influence on our psychological and our sociological decision-making. If I want to get likes on that Instagram picture, I may pose a certain way; I may go to the places other people go. This is me putting a digital image out to the world, but also getting validation and gratification from others in the form of likes and comments. Descartes has his famous saying, but I want to interrogate whether it's still true in the 21st century: just because we think, are we therefore? So let's start with some sociological and philosophical concepts. I'm really interested in agency: the capacity for individuals to act independently and make their own free choices. This is what I'm most concerned about with proprietary technology, that forces are being exerted onto us in ways we don't realize, that our free will, our free choice, our agency is being outsourced or nudged upon. Then there's confirmation bias. This is something I also find interesting: the more we are exposed to proprietary platforms, and we can think of them as social platforms now, the more we're exposed to Facebook or to Instagram, the more our epistemic reality is impacted.
These platforms show us the ways other people live their lives, but they also influence the things we hold to be true. We're searching, trying to interpret information, but a lot of the time, as has been shown in studies going back to the 70s, the things that confirm our pre-existing beliefs are the easiest for us to accept as fact, or truth, or objective reality. Then there's the illusion of control: we overestimate our ability to control events. Ellen Langer is well known for this concept, and it has been discussed in relation to technology since the early days of the internet. Even now, I think it's important to ask to what degree we have the ability to shape our world, and how much we are just a factor of the world pushing against us. And finally, technological determinism: does technology drive the development of society's social structure and cultural values? And if so, what degree of influence does proprietary technology have? When we don't have transparency, when we can't see inside source code, when there are black boxes, when all the processes inside a machine learning algorithm, a neural network, a GAN, whatever it is, are hidden, we are not able to see what's inside and track the outcome back to its origin with a clear and meaningful causal roadmap. And surveillance capitalism: most of y'all are probably familiar with Shoshana Zuboff's theory of surveillance capitalism; she's at Harvard, and there's a book about it. But I thought this was interesting; I forget whether Hal Varian is the former or current chief economist of Google.
But this is the idea the chief economist for Google articulated: that the ability to take data, to understand it, to process it, to extract value from it, to visualize it, to communicate it, is going to be a hugely important skill in the coming decades. So when somebody in charge of the monetization, the revenue, the long-term economic prosperity of a company like Google, a company more powerful than certain countries, is still mapping out the next couple of decades around mining data, around making money off of our information, I think we have to step back and say: okay, this is not an isolated phenomenon. We need to dig into this, and also find ways to communicate with people who may never have thought that Facebook, or their search results in Google, might change how they see the world. As Oscar Wilde said, it is a very sad thing that nowadays there is so little useless information. I think this is particularly true in the digital era. So as I was saying, when we're surrounded by things that confirm our confirmation bias, when we're exposed to things that do not create cognitive dissonance, when a feed we check every day consistently gives us similar information, it's sometimes hard to know what lies outside of these bubbles. MIT Technology Review is talking about these things. We see other world publications talking about how Google is the future. We also see how technology companies have been captured by state institutions, and how smaller companies, not the global players, have found their way into working with governments, amassing information using proprietary systems that they then rent to governments. More and more, there is an extraction industry around data. So this is what I'm going to talk about this evening.
Up to this point, I have six examples of things which I feel capture, at a bare minimum, what I am trying to speak about and make people more aware of. So I'm going to go through cases of this quickly. Not everybody here may know all of these cases, but hopefully they show that there is some weight behind this acronym. So: we are constantly being monitored. Our phones are like spies. Our cars have become tracking beacons, right? We now find ourselves interacting with many proprietary systems at once: in an Uber, on Facebook, while momentarily checking to see if our Amazon package has arrived. We are monitored all the time, and by different forces. I talked about facial recognition in my last talk, but I think we're going to see more and more of the phenomenon here in the lower right-hand corner: Google and Facebook are now matching online data with offline shopping habits, using location services on your phone to see if there's a correlation between the ads they served you and your real IRL spending habits. We're also getting used to being listened to without really thinking about it. We're being monitored: Facebook Messenger is now going to try to use keywords. Anybody that's used Gmail for a while knows this one; they actually just stopped, but for a long time our emails were scanned to serve targeted ads. Now the US military wants to tap into Twitter through these firehose APIs; the CIA couldn't get access for a while, but the FBI was able to get this firehose API, so there are now real-time social media monitoring tools for the world around us. I don't know if anybody here has been able to read the Share Lab report on the Hacking Team metadata. It is an awesome read, a really impressive report, one of my favorite reports from last year.
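As an aside, the core technique in that kind of report, profiling someone from nothing but header timestamps, is simple enough to sketch. This is a minimal, hypothetical example (made-up timestamps, no real archive or parser) that builds an hour-of-day histogram from "Date:" headers alone:

```python
from collections import Counter
from datetime import datetime

# Hypothetical "Date:" header timestamps pulled from an email archive.
# No message bodies are needed for this kind of profiling.
headers = [
    "2015-03-02 08:14", "2015-03-02 09:02", "2015-03-02 09:41",
    "2015-03-02 14:30", "2015-03-03 08:55", "2015-03-03 22:10",
]

# Count messages per hour of day.
by_hour = Counter(datetime.strptime(h, "%Y-%m-%d %H:%M").hour for h in headers)

# Crude text histogram: when is this person at their desk?
for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'#' * by_hour[hour]}")
```

Even this toy version shows a working rhythm (morning bursts, the occasional late night), which is exactly why metadata alone is so revealing at scale.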
They were able to go through it, and this is a graph of the Hacking Team CEO's email frequency throughout the day. It's just a small data set, but you can really get a picture of people's lives with something as simple as email metadata, even just the headers, right? You're able to see who they are contacting, and what that looks like on an hourly or weekly basis; the composite picture of our lives can tell a large amount about us with not a lot of data. Then, with indexing, our information is stored and analyzed, and we're served not what we want, but what they think we want. And I think this is interesting: once we're talking about these large data sets, how similar do we have to be to someone else in order to get served a similar ad? Because of the lack of transparency, and the nature of proprietary systems in which we're not able to look inside, we can't answer these questions. What is it that makes me a liberal or a moderate or a conservative? Am I more than the sum of my clicks? Is "I think, therefore I am" still applicable, or is it "I click, therefore I am"? This is the relationship, this lack of a social contract between people and our technology, that I think we have to continue to interrogate. Some of you have probably heard that Roomba was thinking about selling maps of people's homes; that story broke just last week. This, to me, is the nexus at which we are no longer just online beings: we are now online even when we're offline. There are some interesting uses for some of these things, like looking for spikes in search data for malaria symptoms. Google is not just using searches for "do I have malaria"; it's isolating the search terms that appear around malaria and then using them as indicators to look for outbreaks. Nifty, but also a bit invasive. Manipulation.
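One concrete way to check whether a feed is being manipulated is simply to count how often the same items keep reappearing across sessions. This is a minimal, hypothetical sketch of that idea (made-up post IDs and snapshots, not any real platform's data format):

```python
from collections import Counter

# Hypothetical snapshots of a feed: on each login, record the post IDs shown.
snapshots = [
    ["post_17", "post_42", "post_99"],   # Monday
    ["post_42", "post_13", "post_17"],   # Tuesday
    ["post_42", "post_77", "post_17"],   # Wednesday
]

# Count how many sessions each post appeared in.
seen = Counter(pid for snap in snapshots for pid in snap)

# Posts the ranking algorithm keeps resurfacing across sessions.
resurfaced = {pid: n for pid, n in seen.items() if n > 1}
print(resurfaced)  # {'post_17': 3, 'post_42': 3}
```

A browser extension doing this client-side is enough to start measuring, from the outside, what a proprietary ranking algorithm refuses to disclose.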
Claudio Agosti, who gave a talk yesterday and, I believe, is launching a new project tomorrow: facebook.tracking.exposed. That project is a browser plugin which looks at post IDs on Facebook and asks: how often does this one post show up when you log on to Facebook? This is the sort of data collection that I think can really help make these efforts transparent. We don't really know; we just see a feed, and we don't know if we've seen a post before. I never really look at ads, and I really don't know if an ad has appeared there before, or if that ad is there every day. But with tools like this, we're able to see more and more about the ways algorithms prioritize content and manipulate our feeds, not chronologically, but according to what they think we want. Then there was a really interesting case: Epstein and Robertson wrote a number of pieces about this and coined the term Search Engine Manipulation Effect, SEME. They first demonstrated it in a controlled environment, and then they also ran the study during the elections in India, in 2014 I believe, and were able to look at the ways in which small differences in search engine results were able to change the outcomes of elections. Around the US election, I was looking into writing a piece on the ways technology companies could influence our democratic agency, let's say, our ability to feel that we are exercising our right to vote for the person we think will do the best job. And this really stood out to me as one of the most persuasive cases. Maybe the effect is less than a point in some cases.
Yeah, less than 1%, but the science exists now, and so this can be replicated. Actually, I find it interesting that as these research projects are being done in academia, the same research is probably being done on the campuses in Silicon Valley, to see exactly what the effects of things like a search feed are. There have also been reports of manipulation of what's trending, of how information gets to us. About a year ago, Facebook changed the way they sort the feed: friends and family first now, and the advertisers are happy. And there's also a militarization going on around the influencing of beliefs; we saw recently that YouTube announced that using AI was more effective than humans when it comes to leading people away from terrorist content. So, interception. Interception, granted, is maybe the weakest letter in my MIMICS framework, but I think it's interesting because it shows that above any one system there are higher powers: programs like PRISM have now been shown to be capable of massive amounts of upstream collection. If you think about it, PRISM costs $20 million a year, which sounds like a lot, but to spy on the whole world? If you've got the tap, if you're able to do this, why would you not? And it means consistently owning the large platforms, although I have to say I like that AOL was one of the last to be brought in. With this interception, a lot of the metadata being collected, especially if it's on US citizens, falls outside of Section 702; a lot of this is maybe not even direct content but metadata, email headers that are collected. But as Michael Hayden, the former NSA director, said: we kill people based on metadata.
So this is deadly serious. These are not just throwaway parts of our emails; these are not just casual conversations on Facebook. If you match the description of somebody on a kill list or a high-value target list, there may be a drone coming at your face based on your metadata. Meanwhile, people are constantly finding vulnerabilities in systems, whether it's cell-site simulators, StingRays, or just out-of-date, unpatched telecommunications infrastructure; there is more and more opportunity for information that's collected in proprietary systems to leak out. And I think it's also very problematic when IoT devices that exist on a proprietary yet unsecured platform are susceptible to external attacks. Then there's the state apparatus. Xinjiang province in western China, where the ethnic Uyghur minority lives, is a very restive area. There's been a lot of resettlement by Han Chinese, and so the local ethnic majority has slowly been pushed to the edges of society. A couple of weeks ago, authorities there were actually running checkpoints, stopping people on the road to make sure they had the state spyware installed on their phones. Again, what happens at the edge of the earth, in the most marginalized communities, eventually finds its way back to those of us with privilege. And then the courts are another threat vector: one warrant giving the authorization to hack many accounts, warrantless requests signed without a second look. All of these are ways in which our traffic can be intercepted, even though we're communicating in what seems to be a secure, albeit proprietary, way. Censorship is an issue that I work on quite a bit. I work as a product strategist for onlinecensorship.org, a project founded by my colleagues about three years ago, I suppose. Feel free to follow us on Twitter.
What we do is run a survey that's been online for about a year and a half now. If your social media post has been taken down, or if your account has been suspended, you can go onto our website and fill out a short survey; we take these surveys and aggregate them with other information that we track. We do a weekly takedown roundup: every week we put up four stories that made news around content moderation policies, or abuses of content moderation policies. And every six months or so we write a report; the last one is available on onlinecensorship.org. What we're trying to do is, in some ways, crowdsource transparency around platforms. The companies' transparency reports are really slim: they give only a handful of indicators, and it's very difficult to draw any stories from them. So because they're not giving us the information, we designed a way for people to report their information to us. And censorship is both direct and indirect. Sometimes there can be abuse around censorship, for example flagging: if a platform hasn't built in safeguards against 100 people flagging the same person's content at the same time, then maybe that content goes down until it's reviewed, and this can be used in abusive ways. On Facebook, we've seen many cases in which content that did not violate Facebook's terms of service was still taken down. Again, we work on reports and work with other organizations, but it's very difficult for us to be completely thorough, because this is still unscientific data; it's what gets pushed to us. From it, we try to make recommendations to these companies. Transparency is always at the top of the list, and not just transparency but enhanced transparency reporting, because without solid transparency reporting it's very difficult to know what's going on on a platform that something like two billion people are using every month.
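The mass-flagging abuse just described could, in principle, be dampened with a simple burst detector. This is a purely hypothetical sketch, not any platform's actual moderation logic: if many flags against one piece of content arrive in a short window, hold it for human review instead of auto-removing it, since bursts often indicate a coordinated brigade.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds: 100 flags within a 10-minute window.
BURST_WINDOW = timedelta(minutes=10)
BURST_THRESHOLD = 100

def triage(flag_times):
    """Return a queue decision for one piece of flagged content."""
    times = sorted(flag_times)
    lo = 0
    for hi, t in enumerate(times):
        while t - times[lo] > BURST_WINDOW:
            lo += 1  # slide the window forward past stale flags
        if hi - lo + 1 >= BURST_THRESHOLD:
            return "hold-for-human-review"  # likely a coordinated brigade
    return "auto-review-queue"

base = datetime(2017, 8, 20, 12, 0)
burst = [base + timedelta(seconds=i) for i in range(100)]  # 100 flags in ~2 min
sparse = [base + timedelta(hours=i) for i in range(30)]    # 30 flags over a day
print(triage(burst), triage(sparse))
```

The point is not these particular numbers, which are invented, but that "take it down until reviewed" is a policy choice with a cheap alternative.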
So this is what I was talking about earlier. Google's like: hey, don't worry about it, humans, we're gonna let our AI take on the terrorists. Well, who's reviewing it? Right? Is there any external review? Is there a public editor, an ombudsman, who is able to step into this proprietary AI process and say: actually, a human would not find this offensive, only a machine does? And the final letter: siloing. There's been quite a bit of talk this week, and I've been in multiple conversations, about the idea of data portability. One of the things I work on is community archives, but also long-term archives. How do we create a digital Rosetta Stone that can last 20 years, or 200 years? Is it possible to build anything digital that can last 2,000 years? Well, as long as the information we accumulate on a massive scale every day, all our Facebook likes, all our Facebook content, all our search results, all this ephemeral media, exists only inside a proprietary platform and is not exportable, what happens if Facebook goes down tomorrow? What happens to this shared digital cultural record of ours? And what if you're not on Facebook? The lack of interoperability is also concerning. If you get kicked off of Facebook, you can't comment on certain websites. People have been hard-banned from YouTube and from Twitter; granted, most of them were pretty shitty people. But it goes to show that if companies are able to target individuals, then they are able to eliminate them from a process. And this matters especially when these platforms claim to be representative of democratic values and norms: Facebook held live town halls for the US election, and Twitter livestreams debates. So while these companies say they are participating in enabling democracy, they're also institutionalizing walled gardens which keep some people out.
As we see more in-app browsing from Facebook, and things like AMP, and as URLs are increasingly de-emphasized, this also changes the way we can revisit the same webpage twice. If something doesn't have a URL, how can we send it to the Internet Archive, if it exists only inside a browsing ecosystem inside of Facebook? Again, there are bans even from personalized Google services; my colleague Jillian York has written about this extensively, about how a ban can impact our lives both professionally and personally. Now, I'm not gonna say that I completely agree with this next idea, but I think it's an interesting angle: it's not that computers are too smart and about to take over the world, it's that they're too stupid and they're already here. To me, this is one of the philosophical questions we need to ask ourselves. Are computers smart? Is artificial intelligence really intelligent? Or are the motives of those who control these proprietary systems actually the thing creating this change? I'm not gonna talk too much about biases, although it's a really big topic; I want to give some examples of the ways in which more and more of these proprietary systems and platforms are going to impinge on our lives. So Yahoo, which was just acquired by a large ISP and media company in the US, is working on smart billboards. This is actually from a year and a half or a couple of years ago, but I think it's important to remember, when we're talking about massive amounts of data, or patents for that matter, that this data can pass through a chain of custody. If Yahoo gets bought by Time Warner, there is not a social contract or privacy protection in place to ensure that our data still falls within the original terms and conditions. I don't know what to say about digital assistants.
I don't really get them, but evidently something like 300,000 of them were sold on Prime Day through Amazon, so they're here, and many people now have, yeah, straight up a spy in their living room. The IoT space is not really something that I fuck with, but suffice to say, it's a clusterfuck. A big part of finding solutions to this, I believe, is examining who is making the technology, why they are making it, who it is for, and how long it is meant to last. Kate Crawford, who's a friend and colleague working in the same space, has written extensively about this. If we look at who is creating technology and who is affected by technology, it's telling. Why is it that one in four black Americans have faced race-based harassment online? I'm not claiming a causal link, but I think it's notable that if you look at the policy teams and the engineering teams of these companies, especially Silicon Valley companies, you're not gonna find a lot of black Americans. So whose life experience, whose background, are we taking into account when we're designing these proprietary systems? Self-driving cars are more and more going to be around us too, but I'm gonna close with a few last points, and then we'll see if anybody has any questions. First, I think it's important for us to realize that even though many people here at this camp and in our communities are open source until they die, that FLOSS is king, and so on, a critical mass of people are still gonna live inside these proprietary technologies. A second point: due to trade secret laws, and code-as-speech in the US, there are already judicial protections granted to technology companies to keep their proprietary algorithms and platforms classified as trade secrets. It's not by accident, I believe, that there is so little transparency.
If we were to pull back the curtain and somehow see inside the decision-making processes of the newest technology rolling out, it would probably be a wake-up call. But because we can't see inside, we only see the epistemic reality that we get from using these platforms. And that is, in some way, like blinders: we are not able to see left or right, only straight ahead into the screen. Every day these messages are being reinforced, whether it's politics or music or fashion, and all of these things slowly influence and nudge our agency and create a perception of what we are expected to do. Data is the future; not for us, I think, but for companies. The valuation of startups in Silicon Valley right now is really wild: there are tens of millions of dollars floating around for potential startups, and so many of them have invasive, intimate data collection as a revenue model. Finally, I believe we need to demand more from our machines, from the people who make our machines and the people who sell our machines, and find new ways to encourage algorithmic transparency. Transparency is such a big part of this, but so is user control: things like informed consent, things like differential privacy settings. And finally, we also need to find ways to rethink the financial valuation of these companies. How is Google worth a quarter or half a trillion dollars? Yes, it's a service and it makes our lives easier, but this is wrath-of-God money.
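Of the mitigations just mentioned, differential privacy is the most concrete, and its core idea is tiny. This is a minimal textbook sketch of an epsilon-differentially-private count using the Laplace mechanism, not any platform's actual implementation:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon):
    # A counting query changes by at most 1 when one person is added or
    # removed (sensitivity 1), so Laplace noise of scale 1/epsilon gives
    # epsilon-differential privacy for the released number.
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon = more noise = stronger privacy, at the cost of accuracy.
print(dp_count(1000, epsilon=0.1))  # near 1000, plus deniability-granting noise
```

The point for users is that aggregate statistics can be published without any individual's row being recoverable; whether a platform actually deploys this, and with what epsilon, is exactly the kind of thing transparency reporting should cover.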
So I believe we have to do something, or the future will be written for us. Failure to change the course of current technology and trends could create a system in which, as the next billion people come online and are instantly connected to these information vacuums masquerading as phone apps, we see more and more replication of historical inequalities and bias: socioeconomic status gets entrenched, and technology does not help us gain an increased quality of life and more prosperity, but keeps us in a bubble based on who they think we are. These proprietary technologies are very capable of undermining democratic institutions. Who saw Brexit coming? Who saw the election of the current American president? Some people did, but nobody on my Facebook wall. It's really sobering that we're able to see these processes working in such a powerful way. How could it be that I never saw a single pro-Brexit Facebook post, and saw hundreds and hundreds of anti-Brexit posts, while these people I am so insulated from are out there and still exist? What does that mean? Do I live in the same society as somebody else in my own country if my Facebook feed paints the complete opposite picture of the world from theirs? Then there's the amplification of a global technological hegemony: the rise of the technology corporation to rival the nation state. As somebody once asked: who do you want on your side in a war, Google or Canada? Not talking shit about Canada, but I think it's important that we ask: okay, how powerful are these technology companies? Finally, I'm also concerned about the removal of humans as a primary decision-making force. Call it human out of the loop, society out of the loop. We're already seeing Israel, Russia, and China work on lethal autonomous weapons systems that are truly autonomous: without any human input, there's a kill list fed into the system.
A kill list gathered, maybe, in part from Facebook posts, check-ins, or Google GPS data. So what can we do? I think we have to work on communicating to people that they do not currently have confidentiality of movement, or association, or communication, and ask how we can create privacy as a default. This community is one of the places where privacy is truly appreciated, but we have to do more. We have to create more robust strategies to let people outside of our communities know that this is not just a risk of data falling into the wrong hands; we run the risk of creating a society built not on individual agency, but on technological determinism. Do more: build systems, tools, and processes that enable users to reference the personal data collected by platforms. How can we push Facebook and Google and the other platforms out there to let us know what they know about us? We need to increase algorithmic transparency, and increase control over the portability of data: federated systems or networks that allow us to move across platforms. There's so much to do. Support platforms that provide this, and report your censorship. I watched a number of attendees jump from Twitter to Mastodon not long ago, and the conversations around that, about why Twitter was no longer the place for many people, were quite fascinating. I think this gets at censorship, and at the freedom to run your own instance and operate in a decentralized fashion. And I think we need to do more to increase confidence in the integrity and security of data as it's stored, saved, and transmitted: the whole life cycle of data.
Not just thinking about data as something that exists when I take a selfie, or when I'm searching for results, but thinking about data as a continuum: even though an image had one meaning to us when we put it in the cloud, the cloud may have been forming its own thoughts about that same image the whole time it's been there. So one more time, the hashtag: MIMICS. Monitor, Index, Manipulate, Intercept, Censor, and Silo. I'm not saying it's the best catch-all or the catchiest thing, but it's been a pleasure to present this concept to y'all tonight. And this is the end of my talk. If anybody has any questions, I'd love to answer them. Again, my name is Matthew Stender, I'm on Twitter, and I'll keep this screen up here. Thank you very much.

So please come to the stage for questions.

Hi, my name is Barry. This MIMICS is disturbing, but there's one more thing I would like your view about, which is shills on Twitter. Not trolls who are engaging in a conversation, but people who are trying to manipulate the conversation, and not in a fair way; they're doing it in a systematic way, kind of altering the topics. Sometimes they appear to be bots, sometimes they appear to be human, or you just don't know, of course. What's your view on that, on shills?

I would say, if anything, I would put this into the manipulation category: the idea of a platform being manipulated not by state actors or the platform itself, but by a subset of the user base. Sometimes I read below the line on Twitter and I can't tell if these are people or bots or what. And I think more and more countries engage in psychological operations as psyops become more of an established warfare technique.
Again, I think we're seeing this rise with, you know, Macedonian botnets and these sorts of things, and we have to do a few things. One, we have to get better at spotting or discarding certain users, or have better tools to block them. Perhaps think about: if enough people block one user, can that user get reviewed? Is there a process for a temporary or permanent block if they're engaging in malicious behavior? But when it comes to actually changing the conversation, I think on one hand there are people being employed to do this. Samantha Bee, an American, or rather Canadian, comedian who has a show in the US, actually interviewed some Russian trolls, and they talked about why they did it. It was kind of for the lulz. And I think there's a South Park episode as well, I don't know if you've seen it, where there's a character who basically is a troll. But the question is: what is the motive behind an action? Sometimes it's very hard to tell. Sometimes we just see gibberish. So if there is a derailing, is it to create static in the conversation? Is it to divert topics? In some ways we need to look at the motive. I think it's to distract other users: if it's a controversial subject, like you and me talking about it, and someone overhearing that probably wouldn't know that the shill is just trying to make that conversation less interesting. Yeah, in some ways I think it is also up to us, when engaging in conversations, to have a well-enough-thought-through position. It helps if what we post is well thought through, if we stand behind it, and if it's robust enough not to be derailed. Right. I think we should strive for that. I don't know if we're ever going to get shills or trolls off entirely, because where do you draw the line? What is the difference between me on a snarky day, or whiskey drunk?
And then somebody who is sitting in Belarus? So for me, as far as advocating for transparency and for free and open platforms free of censorship, I find the troll problem, or rather the question of who gets to call someone a troll, very interesting. I would just say that if we're able to create new platforms that are decentralized, where we have more granular control over who is allowed into more curated sub-communities, that's an alternative; but on a platform like Twitter, I don't really have a solution. Thank you. So, practically speaking... Get closer to the mic, please. Sure, absolutely. Practically speaking, how do you see the solution in policy and in us demanding more openness, especially considering the conflict of interest that commercial companies have, but also states, and in some cases states that are not even under our control? How do you see that? Well, in the current geopolitical landscape, I see Europe as a leader. I think that with the GDPR, if we're able to use strategic litigation under that framework, the European courts have shown themselves to be much more sympathetic to people than to corporations. So I think that on the policy side, now is actually a very interesting time to lobby Brussels. I think people are already gearing up for strategic litigation once it takes effect, particularly around things like the right to explanation. I think this is a very interesting policy framework that hasn't really been included in any other data or information regulations that I'm aware of.
And to me, this idea of a policy mandating the traceability of a decision back to its component parts is actually a very powerful mechanism: it ensures at least a chain of custody that we're comfortable with, so that instead of accepting arbitrary artifacts of a machine-learning or algorithmic process, we can really look back and see at what point the chain was broken or what was going on inside the network. So: policy, strategic litigation, and, I think, more and more data stories that really paint a picture. The Intercept had a really interesting piece a couple of years ago about the revolving door between Silicon Valley and Washington, DC. And so I think more projects that visually map out the number of former Googlers in the White House, or the number of former State Department folks running policy at Facebook, are also an interesting way to do network mapping and help us understand how intertwined these things are. That's part of it. We don't actually know all the motivations and interests that both states and corporations have; some of them are playing a long game, and their plans have not even matured yet. And so I think the more we can do to make people aware of who is running these companies, the better. We know cabinet members in governments, but beyond Mark Zuckerberg, how many people's names do we know at Facebook? Beyond Schmidt, Brin, and Page, how many people do we know at Google? So I think it's also about elevating the profiles of, and putting a little more heat on, the people in these companies, to let them know that they're not just making these decisions in darkness but that the world is watching them. Sure, but I think in general that addresses the results, right? Things that we actually get to know about.
It's all the stuff that we don't know: the huge amount of data that they're collecting and making decisions on, either to our benefit or not. You know, there are benefits for them in having it work to our benefit as well, but there's no way to control that. And I think it's going to be extremely hard, either through policy or, except when we do journalistic infiltration or something like that, to actually get that type of information out. I would have said, maybe two years ago if we were here, that advertisers, and lobbying boards of directors, but advertisers in particular, might be a strategic way to influence the course of a company. But now Facebook and Google together have something like 90% of the digital advertising market captured between their duopoly. And so I think we're actually losing the ability to use economic or traditional market-based solutions to nudge these companies. With growth saturating in the West, the emerging markets are really where they're now focusing. So I think it's also important for us to get ahead and work on advocacy and information training in India and Bangladesh and Central Africa and places like this, where there are large populations, and where, through Free Basics, the free zero-rated Facebook internet, and other programs, these platforms are going to roll out. We need to be ahead of the curve and already have strategies in place, working with local organizations on the ground, so that people are at least aware of what it means when you log into Facebook. Because I think the transparency around the terms of service, the legalese of what they're collecting, and actually how much information they have, is frightening. So, with the onlinecensorship.org project I'm involved with, our platform is available in English, Spanish, Arabic, and Portuguese, maybe, I don't know.
So even there we've created a multilingual framework to capture as many people as possible, and as we go on we'll translate into more languages. So I think it's also about, as activists, if we create tools, making sure that they are available not just to our communities but to the world at large. Next question please, from the mic in the back. Perfect, thank you first of all very much for the interesting presentation. Basically you already gave the answer to my question, but I'll try it another way. It is interesting to see that even for a person like you, who is aware of the mechanisms of the platforms, a stream of text in front of your eyes still makes you question your perception of reality, in two ways: bot-created content versus real persons' opinions, and opinions you experience yourself versus opinions you can't verify but are confronted with in your stream. And you already answered this in part, but where do you see some interesting activities you would like to raise awareness of, practically, that we can join and amplify? Well, I encourage everybody to check out the project that I mentioned earlier, Facebook Tracking Exposed. It's a browser plugin that creates a record of the post IDs in your feed, collecting them every time you're on Facebook, and that information can be used to analyze patterns. One of the interesting applications is to figure out whether there are any data stories we can extrapolate from the number of posts we see on a daily basis. On Sundays, do we get more ads? Around election time, are a few key people in our feed getting pushed to us on an hourly basis? So I think that transparency around the algorithmic sorting is quite interesting.
OnlineCensorship.org, to plug my project again, has a survey that people can fill in, so if you ever see anybody on Facebook saying their post has been taken down, especially erroneously, it would be great if you could send them the link. We're really trying to capture as many stories as we can and then turn those into reports and policy recommendations. Yeah, and I think new platforms like Mastodon and other places: if we're able to create a critical mass on open-source, non-proprietary platforms, that gives them more legitimacy. If we're able to create platforms that are not reliant on traditional advertising models or on proprietary closed platforms, where people can create new instances for themselves and it doesn't have to be top-down, then this idea is legitimized in the eyes of Silicon Valley. People say, oh wow, Mastodon got this big without really any VC funding, how could that happen? And so I think we also need to show, as people who are thinking progressively about these issues, that we do not adhere to the traditional value exchange these platforms think they offer us; that our value is actually in privacy, that this is something we value, that the lack of censorship on platforms is something we value, and that we're going to have our voice heard there. And so I think that raising up alternative platforms, and engaging and working to create or be a part of the platforms that we want, gives them increased legitimacy and visibility that hopefully will sink into the minds of those that are funding platforms and the places where they're funded.
I myself never had a Facebook account and never used it, but thank you very much for raising awareness, because that's basically my argument to people: once you know what's really going on there, and what its purpose is, you would not use it anymore. You supported that very well. Thank you very much. You're welcome. Any more questions from the audience? We still have a minute for questions. No? Thank you all very much. It was a pleasure being here tonight. Thank you very much also, Matthew.