So, again, welcome to the Association of Internet Researchers' public panel to discuss the important topic of who rules the internet. I'm Jennifer Stromer-Galley. I am a professor at Syracuse University in the United States, and I have the honor of being president of the Association of Internet Researchers. I have to do a little shift. Let's see if I don't break the PowerPoint. For those of you who are new to the Association of Internet Researchers, or AoIR for short, this is our 17th conference. We are an international organization that brings together scholars who collectively seek to examine and to understand the complex information and communication environment that all started with the internet and that now includes mobile phones, social media, corporate conglomerates, and governments that regulate, control, and dictate the content and flow of the information, entertainment, and community that we experience daily. But I'm getting ahead of what the content of the panel is. Before we begin to dive too deeply into exactly those issues around governance and control, I want to take a moment to acknowledge this amazing city where we are meeting this week. In fact, I got a fantastic little boat tour this afternoon down the Spree. Although the weather has not quite favored us, the city architecture and the history provide an enticing distraction and also an important backdrop to the intellectual work that has brought us together. I sincerely thank my colleagues at the Alexander von Humboldt Institute for Internet and Society and the Hans Bredow Institute for Media Research for all of their care and time and support in making this conference an incredible success. I also want to thank Humboldt University for their generous hospitality in providing to us these beautiful facilities and allowing us to meet and plan and talk together. To kick off our discussion tonight, I'd like to welcome Matthias Graf von Kielmansegg. 
He is head of the Directorate of Strategies and Policy Issues. He's speaking on behalf of Frau Johanna Wanka, Federal Minister of Education and Research. She would have liked to be here, but she had another engagement this evening. In correspondence, she expressed great interest in our topic tonight and the ideas that we're engaging here at AoIR. We are fortunate that she has sent Herr Graf von Kielmansegg to welcome us to Germany. And in turn, please join me in welcoming Herr Graf von Kielmansegg to AoIR. Please join me in welcoming... Is that okay if I close this one here? Great. I'm still with my paper, not yet in the digital age; still relying a bit on print. So, dear Professor Stromer-Galley, dear Professor Schulz, dear Dr. Puschmann and distinguished guests, thank you very much for the invitation to this annual conference of the Association of Internet Researchers, which has come to Germany and Berlin, as I have learned, for the first time in its history. I do hope it will not be the last time. I would also like to bring the best wishes from Minister Professor Wanka, who, as has already been mentioned, cannot be here herself tonight. We have a parliamentary sitting week and there are a lot of talks, discussions and meetings going on. But let me assure you that she takes a great interest in digitization and its impacts on economy and society. This includes many subjects that you are covering in your fields of research and in this particular conference. For those of you who are not too familiar with German digitization policy, let me start with a few words on our national digital agenda, as we call it. It was two years ago that the German federal government for the first time published a digital agenda. This agenda is a multi-perspective approach and includes aims and activities for all ministries, from technical infrastructure to legal or security issues. 
Each ministry is in charge of bringing forward its own topics, and the overall process is steered through a high-level interministerial group. In terms of activities, it concentrates on the legislative term of 2013 until 2017, that is, until next year. Then we will have to prolong it and develop it further. But with regard to principles of action and to strategic targets, it reaches out far into the next decade. In addition to this, we have an annual national IT summit, including preparatory bodies. This forum is used for a constant dialogue with the private business sector, the unions, the scientific world and society. And it is not just about dialogue; there are also quite a number of common strategic initiatives emerging from this forum, such as what we call Industry 4.0. I guess this branding, in its German terms, is already used in other countries as well. So obviously, what is the role of the Ministry of Education and Research in this digital agenda? You all know, I don't have to convince you, that digitization is a fundamental game changer, and that refers not just to the way our economies offer new services and produce goods in a different manner. More importantly, it changes the way we create knowledge and accumulate it. It changes the way we communicate, we collaborate, we decide what is important for our life. Digitization also loosens our ties to time and to place. And we're definitely not at the end, but somewhere in the midst of this rapid process. I might just remind you: Wikipedia was founded 15 years ago. The first iPhone was introduced no more than 9 years ago. Now we talk about the possible effects of blockchain technologies or artificial intelligence. So we definitely do not know what the world will look like in 10 or 20 years. And yet we have to make decisions for our own life and, on the political level, also for our society. Education and research are therefore of vital importance. 
Good education and research policy does not only react to inevitable changes; rather, we need to seize the opportunities that come along with digitization proactively. There is no doubt that research is one of the major drivers of digitization. In Germany we are focusing on implementing the Internet of Things into our core industrial branches. We are going into autonomous driving, precision medicine, smart living, smart cities and various fields of sustainability. In Germany we traditionally used to have a rather technology-driven understanding of innovation processes. With the governmental strategy for innovation, which came into force two years ago, we now focus also on the economic and cultural framework, the ecosystems, as we like to call them, that allow digital solutions for production and services to grow in a quite rapid and massive way. But research being a driver is just one aspect. The other aspect is that science, research and education are themselves users of digitization. So our approach in our ministry is based on several strategic pillars. Let me start with education, because it's the basis of all subsequent stages and concerns especially the young generation. And here we have to answer some major questions. What will determine the further development of digital education? Is it technology or pedagogy? Who will be the change agents we will have to rely on? Will it be the techies or the teachers? So our answer is: in the end, everything depends on the teachers and the schools and their willingness to take an active role in this change. So we will have to secure the primacy of pedagogy. It is vital that we use digital educational media to contribute to realizing our overall educational policy goals, and not vice versa. And these goals are participation, permeability and equal opportunities. 
To reach these goals, digital educational media definitely offer new possibilities: more individual support, more effective learning, widened access to educational offerings, greater potential for creative development and a better preparation for tomorrow's world of work. So to bring all this forward, we are planning to start an education campaign for the Digital Knowledge Society, as we call it. This campaign will develop answers to questions such as: how do we enable young people to find their place in a digital society? How can we prepare them for changing workplaces? And how can we avoid a digital divide in our society? Our educational campaign is directed towards all education levels, that is, schools, our vocational education and training system, which is of great importance for our economic model, and also higher education. So there we will be talking about new content and about open educational resources, about qualification and of course also about the technical infrastructure. And not to forget, especially in Germany, we will have to look for a better framework: the legal framework, the questions of data security and an education-friendly copyright law. All this is done in Germany in close cooperation with our 16 federal states, which have their own responsibilities, particularly in school matters. Equally important, and closer to your day-to-day work as researchers, is the digitization of science. Modern science creates huge amounts of data. Big data is no fiction but reality. Smart data will be the next step, and it is already on its way. The challenge that politics, universities and research organizations are now facing is to create an infrastructure that is capable of handling and giving access to these data. To this end we need new scientific methods and new rules for accessing and deploying data. We call it securing the life cycle of data. 
So it's about creating, saving, analyzing, verifying, then second and third use, and readability once technical standards have changed. So we are talking about quite some difficult questions of governance, too. Who will decide what to save and what to delete? Who will be responsible for each stage in this life cycle? What are the criteria of access for third parties, be it from the scientific world or from the business world? Another challenge lies in hardware. The use of big data goes along with hardware requirements, in particular high-performance computing. The federal government and the federal states, which in Germany are responsible for the basic funding of the universities, are working on a common approach to meet these challenges, and we plan to launch an initiative of action next year. While doing so, we will rely on advice from a newly established Council for Scientific Information Infrastructures. This body consists of representatives from scientific institutions as well as from society and politics, and it accompanies the digital transformation in science. Just in September, that is, three weeks ago, we announced an open access strategy of the Federal Ministry of Education and Research. With this strategy, we want to promote open access as a publishing culture that promises wider access to scientific knowledge. To this end, we will start a number of measures. That includes an information campaign to spread knowledge and a dialogue forum to discuss open questions and exchange experience. It's about counselling and support for innovative approaches, and it's about monitoring of progress. And we have added new provisions to our own funding conditions: any article in a scientific journal that is based on the results of projects financed through our ministry shall be published open access. So, to my last point. I'm sure I do not have to convince this audience. You all bear witness that digital transformation is a research topic in itself. 
The societal implications of digitisation are way more comprehensive than those of other technological innovations. Hence, we need to know more about the possibilities and consequences of digitisation. And to this end, the Federal Government has decided to establish a German Internet Institute. We do know that there are strong players in Germany in the field of digitisation research, and I assume a number of them are among this audience. A number of you might be working in German research organisations, institutions and universities dealing with that topic. So, the government has decided to take advantage of these resources and to give an additional incentive to establish a strong research institution of international excellence. It shall contribute to a clearer evidence base by investigating a number of issues of strategic importance. It's about access to and participation in the digital world, privacy and self-determination, governance and regulation, and also, in a broader sense, consequences for work, for value creation and for our democracy. So, we want to support a broad interdisciplinary expert team of social scientists, economists, legal experts, as well as information scientists cooperating in this German Internet Institute. And we expect that knowledge transfer will benefit society, politics and the economy. And of course, we do hope that the future German Internet Institute will contribute to international scientific debates like this one. The competitive selection process for the German Internet Institute is just underway. We expect the winner to be announced next spring. So, you will know it when you have your next conference in 2017. Our ministry is willing to spend up to 50 million euros for the first five years to kick-start this institute's work. And then we'll have an evaluation and see how we can go on. 
So, dear ladies and gentlemen, this short input was meant to highlight important policy measures that accompany the digital transformation in Germany. For the rest of your conference, you will go into many more detailed questions. You have probably done so yesterday and today. And I'm very interested in also listening to the following panel. I'm sure it will be very interesting and we will have, I do hope, intense discussions. Let me just make one last remark. The overall title of this conference is Internet Rules, with an exclamation mark, as I've seen. Now, there are some prophets of a digital doomsday soon to come who say it's not Internet Rules, it's rather Internet Fools. This is not my position, to make it clear. It is still up to us to be the fool or the wise. But I do understand this title as a kind of provocative headline to draw attention to what will happen if politics, science, economy and society do not find common answers to the new challenges. So, the political actions I've laid out in my remarks all aim in the end at the same purpose: that we will be able to say, also in a digital world, it's not the Internet that rules, it's still democracy that rules. We do need you and your work to succeed in pursuing that aim. So, thank you very much for your invitation and your attention, and have good discussions further on. Thank you very much. So, now, thank you very much for those remarks. I am intrigued by the idea of a German Internet Institute that is supported by the government. Perhaps we need to challenge the United States to consider something equally important for this work. Now, what we're really waiting for is the lively discussion. And so, for that, I would like to introduce Cornelius Puschmann. He is a senior researcher with the Hans Bredow Institute for Media Research and the Alexander von Humboldt Institute for Internet and Society. 
He serves as AoIR's program chair, which means that he gets the enviable task of envisioning the conference theme. Now, his idea to tackle the challenging topic of who rules the Internet was a prescient one, and I look forward to this public panel discussion. So, please join me in welcoming Cornelius Puschmann. Thank you. Oh, this is on already. Wonderful. So, I will proceed straight to this position here, for the simple reason that I want to no longer keep you waiting, and proceed straight to introducing our wonderful panelists for tonight. I will introduce them one by one and they will take their place; each of them will make a brief statement, a brief provocation on the topic of this panel discussion, and after they have all three made their statements, we will actually start discussing. So, don't worry, in case you were concerned whether this would actually take place. So, the first panelist that I am absolutely honored to introduce is Kate Crawford. Kate is an expert on the complex social impacts of large-scale data, machine learning, and artificial intelligence. She is, as many of you know, principal researcher at Microsoft Research, a visiting professor at MIT and a senior fellow at the NYU Information Law Institute. She is also on the World Economic Forum's Council on AI and Robotics, and she recently co-chaired the White House symposium AI Now: The Social and Economic Implications of AI in the Near Term. She is also a member of the feminist art and technology collective Deep Lab. So, give a warm welcome to Kate Crawford. Well, good evening, everyone. Thank you so much for being here, and thank you, Cornelius, for such a lovely introduction. It is, for me, a complete pleasure and an honor to be presenting here at AoIR 2016 and to look out and see so many friends and colleagues in the audience. So, when Cornelius asked me to think about this question, who rules the Internet, the first thing I did was, of course, look at my bookshelf. 
And what did I see? But a whole lot of people in this community who have written books on this very topic; some of you are actually in the room and I'm looking at you right now. And what was interesting was that, flicking through these books, I realized how much this group of researchers has been so instrumental in moving us beyond the somewhat naive idealism and utopianism of the 1990s, where we had this idea that somehow the Internet was borderless and control-free, into thinking about something much more granular, to think about the social, the technical, and the infrastructural layers. So, I think what's happened now is that we can really think about the Internet as a very complex stack. It begins with, say, the network protocol layer. Then we have the platform layer. We have the infrastructural layer of data centers and relay switches. We have the satellites above us. We have the marine cables below us. And, of course, we also now have this much-hyped Internet of Things layer: billions of sensors, from biochip transponders inside farm animals to zombie networks of webcams used to mount enormous DDoS attacks. When I think of this work, I think of people like Anne Galloway, Jonathan Sterne, Biella Coleman, Paul Dourish, Tarleton Gillespie, Nicole Starosielski, Shannon Mattern, Geert Lovink, Nancy Baym, and Geoff Bowker, among many, many others. All of you have actually shaped my thinking and understanding of these networks of Internet governance. But I want to share a new problem with you tonight. Basically, we have a situation where machine learning and artificial intelligence layers are being rapidly deployed into our social systems. And I think they are now posing some very hard questions about control and governability. So, to be clear, I'm going to be talking about narrow AI tonight. That's the kind that performs specific tasks, that sometimes has personalized, feminized names like Siri, Cortana, Alexa, and Viv. 
But most of the time, these are faceless, nameless back-end systems that could be doing things like, for example, image recognition in Facebook and Google. And their intelligence, such as it is, depends on ingesting as much data about us as possible, in something that Shoshana Zuboff has called surveillance capitalism: this twin imperative of engineering and finance to get as much data as possible, because it makes your company look more valuable and it also trains the underlying machine learning algorithms to be smarter, or at least to simulate smartness. Of course, AI will never be truly autonomous from humankind, as Matteo Pasquinelli has written, and neither will it be free from the vagaries of capital. It is both distinct from and, I think, imbricated within human governance systems. But crucially, AI and decision support systems are now moving beyond what we've traditionally called the internet, this weird exploded category that keeps getting bigger. They're now being infused into a wide range of social institutions, influencing who is released from jail and what kind of treatment you'll get in hospital, and shaping the very news that you see. Now, I know many people in this room were as fascinated as I was last month when we saw that Facebook decided to censor a very famous Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs in the Vietnam War. What was interesting about this process was that we still don't know for sure what combination of automated content detection and human teams actually made this decision, and there's a very interesting set of interface questions there, too. But, of course, the image of a naked girl is going to trigger a set of automated processes that say this is a violation of our policies. So what did happen is that the image was removed. 
People noticed this removal, and we had a massive international outcry, to the point where the Prime Minister of Norway was saying that Facebook was effectively redacting our shared history. So the image was restored to Facebook. And you might think this is a very happy story. What I would like to suggest is that it's actually a rare instance where we get to see this kind of process happening. It is the tip of an iceberg: predominantly, these kinds of decisions simply don't get that kind of attention. And rarely do we have a Pulitzer Prize telling us which images are so important that they simply can't be removed. So what I think is interesting here is that this points to a much larger mass of unseen hybrids of automated and human decision-making. Most of these don't garner any attention. They're embedded in back rooms, working across streams of multiple data sets at once, with no consumer-facing interface. Their operations, then, and their rules are not apparent to us, which in itself is not necessarily new. There are many opaque or uninterpretable human institutions as well. I love this quote from Hannah Arendt, who was writing about the Pentagon back in the 1970s, when they had just started to use large sets of data, and she wrote of their utterly irrational confidence in the calculability of reality. She says it well. I think we're having another moment like this with AI, this irrational confidence in the calculability of reality. So how might we do something about this? I want to speak very briefly, just about three points, so that we can move on to our amazing panelists. The first response might be: we'll make them transparent. Open up the black boxes, as Frank Pasquale might say. I think there are going to be some problems with this approach. 
I have a forthcoming article with Mike Ananny, who's also here tonight, where we look at the long history of the transparency ideal and where it just doesn't map very well, particularly to machine learning systems. Some of you will be familiar with how deep neural nets work, but certainly what we can say is that the engineers who work on them will say, hey, this is really great at recognizing cats or recognizing human faces, but we don't necessarily know why. Even when you have all the data, all the inputs, the models, and the outputs, you can't necessarily know how a DNN is working. And what's interesting is that just recently, Y Combinator has installed an algorithm that they call HAL9000 (nice one, guys) that is being used to determine admissions to Y Combinator, which is a very hot program, and Sam Altman said with some pride, we don't know how it works because it can't tell us. Which is great, until such time as you are being judged by a system like that or being denied opportunities, and there's simply no way that you will know how it's working. So I think the issue here is that even if we had perfect transparency, it does not equal understanding. How do we govern what we don't understand? The second approach is that we should think about investigating these systems from the outside. We should poke them and prod them and see what they do, and try to do some reverse engineering. I think there are some really excellent examples of how this is being done. I'm particularly thinking here of Christian Sandvig's work in algorithmic auditing. He sadly couldn't be with us tonight, and my thoughts are with him. You might know that he is also a plaintiff in an ACLU (American Civil Liberties Union) case, where they're trying to remove the research restrictions in the Computer Fraud and Abuse Act. It's a really important case, and it's a really good reminder that there are very real legal structures preventing us from doing this kind of auditing, this kind of investigation. 
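[Editor's note: the outside-in auditing approach described here can be sketched in a few lines of code. This is only an illustrative sketch, not an actual audit method from the talk; `opaque_system`, its threshold values, and the group labels are all invented stand-ins for a real black-box decision system. The idea is simply to probe the system with paired inputs that differ in a single attribute and compare outcome rates.]

```python
import random

def audit_disparity(system, base_profiles, attribute, values, trials=1000):
    """Probe a black-box decision system with inputs that differ only in one
    attribute, and return the positive-outcome rate observed for each value."""
    rates = {}
    for value in values:
        positives = 0
        for _ in range(trials):
            probe = dict(random.choice(base_profiles))  # copy a base profile
            probe[attribute] = value                    # vary only the audited attribute
            if system(probe):
                positives += 1
        rates[value] = positives / trials
    return rates

# A hypothetical opaque system: the auditor cannot see that it secretly
# applies a stricter score threshold to group "B" than to group "A".
def opaque_system(profile):
    threshold = 60 if profile["group"] == "A" else 75
    return profile["score"] > threshold

random.seed(0)
base_profiles = [{"score": s, "group": "A"} for s in range(50, 90)]
rates = audit_disparity(opaque_system, base_profiles, "group", ["A", "B"])
print(rates)  # acceptance rate per group; a large gap suggests disparate treatment
```

Even without access to the internals, the measured gap between the two groups' acceptance rates makes the hidden rule visible, which is the core intuition behind external algorithmic audits.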
So finally, the more novel approach that's emerging now is that people say: if we have a problem with governance and AI, AI can fix it. This is something that we're starting to see from groups like the finance sector, who have been dealing with high-frequency trading algorithms for some time. They have now created a weird ecology of trading phenomena that, according to recent research, is full of predatory machine learning algorithms that are ganging up on other algorithms. It's extraordinary. This is something that Tero Karppi, who's also here tonight, and I have looked at, and it's a truly fascinating set of sub-second phenomena. So in a case like this, it's tempting to want to create something like a police AI, a master algorithm to bring all of the unruly algorithms into line. But I think there are some real traps with trying to build an AI that watches the AI. And what I've been doing is going back to the early books on AI that were written in the 1970s. My favorite is by Joseph Weizenbaum, who was a professor at MIT, and he developed ELIZA, the much-loved conversation simulator. But he started to look beyond his field in computer science and began to ask hard questions about the limits of reducing human decision-making to the formalism of computer logic. And he suggested that if we continue to apply artificial intelligence to social systems, it will gradually become, in his words, a slow-acting poison. So while that might be a pretty dark turn of phrase, I think it's a useful reminder not to be too seduced by AI governance, and that is why we need research communities like this one to continue to develop critical and conceptual frameworks to account for these new machine logics that bind human and non-human rulers together. Thanks. Thank you so much, Kate. I think there's a whole field in naming decisions of things like HAL 9000. 
So I think we should look more into how science fiction is intervening in decisions about actual technology design. So the second speaker that I'm honored to introduce is Carolin Gerlitz. I should start by saying that Carolin Gerlitz very graciously filled in, stepped in, for Christian Sandvig. Christian, as Kate already mentioned, couldn't be with us. Many of you know the circumstances under which he cannot be with us, and our thoughts are with Christian at this point, and we hope that we will be able to somehow make up for this at a future AoIR. So Carolin is professor of digital media technologies at the University of Siegen and a member of the Digital Methods Initiative. Previously, she was assistant professor in new media and digital culture at the University of Amsterdam. Her research explores the various intersections between digital media, methods, and economic sociology, with a specific interest in web economies, platform and software studies, brands, valuation, topology, numeracy, social media, digital methods, and issue mapping. So give a warm welcome to Carolin, please. Thanks a lot, Cornelius. Great to be here. So we have the nice task of talking about the question who rules the internet, and after having spent two days with all of you guys, I have a very short answer to this, and it is: platforms. However, I thought that would be too easy a way out of my task tonight, just to answer this, so I'm going to answer the question: how do they rule the internet? And I have a couple of answers to that. So first of all, platforms set out very, very strict rules for what users can be (profiles, bots) and also what users can do (friend, comment, share, like), which are very strictly defined in form, but can be quite open in interpretation. I'd like to think of that as a grammar of action, drawing on the work of Philip Agre, whereas other colleagues such as Anne Helmond and Taina Bucher today theorize this kind of platform rule as affordances. 
Whereas these rules of what users can be and do may seem very rigid in form, they can be open and flexible in interpretation. So before Facebook introduced its Reactions, people were using the Facebook like for a number of different purposes: as affective response, as ironic reaction, as a way to gain attention. However, with the introduction of Facebook Reactions, each single affect was given its very own strict form, and that disambiguated the like button. So rules of what users can do basically slice down user action into quantified little data points, turning them algorithm-ready and potentially opening them up to be endlessly recombined. However, that recombination is subject to yet another set of rules. Second, platforms create rules about who can access their data and determine what can be done with it. In the platform literature, this kind of rule set has often been referred to as the programmability of platforms. Platforms realize that rule through their APIs and their accompanying codes of conduct and documentation, which determine which data can be extracted by whom, how often, in what quantities, and what it can be used for. On the one hand, they incentivize developers to play around with the data, to kind of outsource some forms of innovation to them. However, platforms also like to keep the interpretive flexibility of that data under control and adjust the rules surrounding their APIs. And those of you who have been working with tools to extract data from platforms have probably, over the last years, experienced these changing rules and their impact on your own research: seeing the Twitter API change and only allow researchers to access a 1% sample of the data, seeing the Facebook API change and limit Facebook research tools, and, last year, Instagram change its API rules, disconnecting a number of tools from the API and making it harder for researchers to hold platforms accountable. 
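[Editor's note: the kind of access rule described here, such as a platform exposing only a roughly 1% sample of its full stream to researchers, can be sketched schematically. This is not real platform API code; the class, the sample rate, and the quota below are hypothetical, meant only to show how a platform-side rule shapes what a researcher can actually observe.]

```python
import random

class SampledStream:
    """Hypothetical platform-side access rule: only a random ~1% sample of
    the full activity stream is delivered, capped by a request quota."""
    def __init__(self, sample_rate=0.01, quota=100_000):
        self.sample_rate = sample_rate
        self.quota = quota

    def deliver(self, items):
        delivered = []
        for item in items:
            if self.quota <= 0:
                break                       # quota exhausted: access cut off
            if random.random() < self.sample_rate:
                delivered.append(item)      # only a thin sample gets through
                self.quota -= 1
        return delivered

random.seed(1)
full_stream = range(1_000_000)              # the platform sees everything
researcher_view = SampledStream().deliver(full_stream)
# The researcher sees roughly 1% of activity and cannot verify the other 99%.
print(len(researcher_view))
```

The point of the sketch is that the sampling rate and the quota are set unilaterally by the platform; when those parameters change, as with the API changes mentioned above, the researcher's window onto the platform shrinks without the underlying activity changing at all.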
So, API rules determine who can participate in valorizing platform data, but also in holding platforms accountable and giving a public account of what actually happens on platforms, as my previous colleague Bernhard Rieder argues. Third, when encountering and dealing with platforms, you're not only subject to the rules that platforms themselves set up. Increasingly, the engagement with platforms doesn't happen only directly via the platform but is mediated through alternative clients, cross-syndication software and support apps, which become intermediaries of platform rules and add their very own rules on top of that, creating a kind of cascade of different rules that users have to engage with, and which can also create different kinds of friction. Fourth, the notion of cascading rules can be taken even further: platforms don't only extend into apps but also into the web, as many of you have been writing about, through the implementation of plugins, logins, platform-facilitated comment sections or advertising. The rules of platforms are there intermingling and interfering with the rules of open web standards, and if Christian Sandvig were here today, he would definitely be talking about the ways in which platforms are taking on an increasingly central infrastructural role as providers of communication and social expression. And in this sense the platforms create the rules for making web data platform ready, as my colleague Anne Helmond argues, on which so many webmasters rely to grow their audiences. And in that sense the rules of platforms matter, once they extend into the web, even to those who decide not to have a platform profile in the first place. And finally, there are the various rules regarding content, my fifth point, which include the algorithmic sorting of content that operates in a highly situated and contextual way, responding to both data and user action. 
There are various legal frameworks and their enforcement, or the lack thereof, as the discussion of hate speech on Facebook in Germany over the last months has shown, but also the temporal rules of platforms, which determine how long content is shown, at what speed, and how commercial content is fed in, trying to identify the right time, as Taina Bucher argues, to be shown in fast-paced feeds. So these different rules, which are definitely not exhaustive, can be characterized in a number of ways. First of all, the rules that platforms create set up a negotiation between openness and closure of what can be done with platforms and their data. They are often partly or completely invisible or opaque. And platform rules do not necessarily come by themselves but are layered with a lot of intermediary rules, leading to a cascading of different rule sets. Most importantly, and this is the point with which we finish today, rules in relation to platforms cannot be discussed without talking about modes of valuation. What counts as value? For whom? And who actually counts? What is central in the context of platforms is that their rules enact the logics of both Taylorist and post-Fordist economies, and their modes of valorization, at the very same time. Taylorism refers to micro-management aimed at mass production, efficiency, and scalability, slicing processes into small, countable, and discretized units which can be monitored, evaluated, and scaled up. And this is exactly what platforms do. They allow us to act, but in very fine-grained, datafied, and algorithm-ready ways. Basically, platforms create the rules for the mass production of social life, as users are incentivized to perform highly specialized actions in a repetitive manner. However, platforms don't earn money from users performing the same actions, from users liking the same kind of content, and developers doing the same thing with platform data.
Platforms earn money from exactly what post-Fordist economies valorize: affect, creativity, cultural and symbolic production. The rules of platforms are, to speak with Lazzarato, about putting life to work, by rendering life into Taylorist data points that can be counted and measured, but also by allowing these data points to be part of interpretation, affective experience, and social relations. The rules of platforms are enacted through counting, informed by what counts, and determine who counts. Thank you.

Thank you, Caroline. So we turn to the third and final panelist, and I'm very happy and very proud to introduce someone who is not a scholar, or at least not a full-time scholar, but somebody who actually does things with and through technology. Fieke Jansen researches and writes on the politics of data and digital shadows at Tactical Tech. Tactical Tech is an NGO that does things through technology and that makes technology usable, accessible, hackable, malleable for people. She hopes to bring more transparency to the global data industry. Prior to moving to Berlin, Fieke worked at the intersection of the internet, social change, and security at Hivos, I hope I pronounced that correctly, where she set up and managed their digital emergency program for human rights defenders and activists. She also co-authored a book called Digital AlterNatives, with a capital A and a capital N. So thank you. Please welcome Fieke, and thank you, Fieke, for coming and joining us.

So thanks, Cornelius. As he said, I'll be talking from a slightly different perspective, because we mostly work with people on the ground. And I want to thank Kate and Caroline: Kate for giving a sketch of the current and future situation and the challenges around data and AI, and Caroline for showing the lock-in of the platforms, basically. So when Cornelius asked me the question of who rules the internet, for me the answer was quite clear: of course, it's the big five.
I'm not talking about elephants and tigers; I'm actually talking about the big five: Apple, Google, Microsoft, Facebook, and Amazon. And maybe this is common knowledge. So then I questioned: how did we let it get this far? If I talk to users, which I usually do for my day job, there's a certain unease with this. Maybe Google grew up as the alternative search engine, but we all know it's no longer like this. So there is a certain unease, but people are not able to pinpoint what it is. For instance, when we work with activists, we ask them to draw the internet, and most people draw a cloud. This is very different from a physical infrastructure with very big corporations behind it. I was just in Myanmar giving a training, where Facebook is the predominant access provider to the internet, and when we asked what would happen if the government banned Facebook, you should have seen the reactions. And then in the same week, I was working with an open source company, and they used Gmail as their email provider. For me this is quite contradictory. So when talking about who rules the internet, my question this morning was: how did we let it get this far? And maybe I should not go into the big five right away. I'm going to take a step back, and I want to give two examples of small interventions that raise a lot of questions. I think this is also where my talk takes its inspiration from: the intersection between activism, art, and politics, because let's face it, it is political. The first example is Bitnik. I don't know if some of you know it, but it's an art group, a collective, and they created a darknet shopper. This project is amazing. They basically created an algorithm that shopped the dark net and bought random stuff for 50 US dollars. They sent this to a gallery.
Of course, as you can imagine, they bought drugs, among other things. So drugs were in the gallery, and then the police walked in, because they had figured it out. But then who are you going to blame? The algorithm? The gallery, for actually receiving it? Or the creators of this darknet shopper? And this was 2015. So when we talk about algorithmic responsibility, we always think about the big companies, but actually the things that drive a lot of social conversations are the really small examples where all of a sudden it becomes clear. There is a project called ICWatch. It's done by somebody who is based in Berlin, and basically what he did was, after the Snowden leaks, he made a data set of all the program names that the NSA uses to spy on us. And he scraped LinkedIn. With this he was able to identify people who worked for the secret services. You can have an opinion on this, and I think opinions are very divided. But what was quite interesting is that, all of a sudden, because it was about the secret services, it triggered this debate: what is public and what is private? Because officially we're not allowed to know who works for a secret service, but these are also people who are looking for a job, and where do you look for a job? On LinkedIn. So this triggered an entire discussion, and of course it was called unpatriotic. But isn't it a little bit weird that something a coder has developed is questionable, yet this is data that LinkedIn sells every day, and we don't question it? So I think these small creative interventions trigger social discussion about what's acceptable and what's not acceptable, but we fail to extrapolate this and see that for the big five this is basically their bread and butter, and we allow it. So this was my introduction. Before I go to the big five, I already named them, but I want to unpack two of them. Let's start with Google, maybe the biggest of the big five.
And it is quite fascinating, because Google started as the alternative search engine, or at least this is how it came across in Europe, and of course we got Gmail. It's called Alphabet now, but in 2014, I think, they were valued at 400 billion US dollars. They're the second biggest company in the world. This is more than the GDP of Austria, and it's all built on our data, and we've never asked that question. Maybe because it's free we don't ask these questions, or maybe because the companies are so big that it sort of slips past us. And the thing is, they're ever present in our lives. If we start noting it down, we might name 10 services we use; in reality we probably use a lot more. We have Android, Gmail, Google Search; these are all quite in-our-face services that we use and choose to use every day. And then of course there are a lot of services that Google offers that have become less in-our-face, and I think Kate was talking about this as well: when we go into the sensor world, it becomes even more difficult to decide whether you want to be in the lock-in of a company or not. And then my favorite part about the big companies: it is political. We did a mapping study together with the University of Amsterdam in the summer school on the revolving door between governments and companies. I think in Europe this revolving door was maybe always present, but it was never talked about. In Brussels, lobbyists have to be registered, so you can see how many there are. Google has nine, and I happen to know this only because we looked at Google by scraping LinkedIn, so you can also use it on them, and seven of them come from the European Parliament. Seven people who used to work for the European Parliament now work as lobbyists for Google. This is a revolving door, and I think these are all the aspects of the big five we don't see.
Then I wanted to unpack Facebook a little bit, because it's the other one of the big five, and there it's more about scale. Whereas Google is very smart, with so many services and so much wealth and power and knowledge that you have to sort of respect them, Facebook has so many users. According to the stats, they have 1,400 million users a month. It's quite a lot. They have 4,300 million likes a day. This is all data work being produced. They're worth 190 billion US dollars. And there is a lot of political influence, and I think this is the step that we're always missing: the political influence beyond what you can see with, for instance, the lobbyists who get officially registered. What you see with Facebook is that they have an enormous drive to conquer the global south. You see all these amazing images of Mark Zuckerberg with the former president of Brazil, Dilma Rousseff; he's now courting China; they're trying to roll out Facebook Zero, and they're selling it as the internet. And hardly anybody is questioning: is this the internet? So there's a lot of political influence and political capital. So then the question might be: who rules the internet, is it only the big five? And this is where it becomes interesting, because here you could maybe say it's the 1% of the 1%. Because it's not only the five big companies, it's also the very limited number of people behind them who are financially backing all these companies. They're called venture capitalists. And I find it quite fascinating that they're all centralized in one very specific location in this world. You have two venture capital firms, one called Sequoia and the other Kleiner Perkins, which invested in Google, Amazon, Apple, PayPal, and Yahoo. Most of us use all of these. Then you have Peter Thiel, who started with PayPal, sold it, and became a venture capitalist himself. He not only started Palantir, which is the best-kept secret of Silicon Valley, but he also invested in Facebook and LinkedIn.
And you have Google Ventures, which is their own venture capital arm. Now that they have a lot of money, they invest in Slack, Uber, 23andMe, the company where you swab your cheek to find out about your DNA, and over 300 other companies. So it's not just the big five; in the end it is a very small group of people who own the internet. And Cornelius asked us to make a provocation. So I thought maybe we also have to not just focus on the big companies, but take a very critical look at ourselves and at how we actually let it get this far. Because we might find it political, but in the end most people still use it. They're not questioning it: it's free, it's easy, we'll use it. Speaking from experience, since I mostly use only open source, it's not so easy. Yes, all these services probably work better, but somewhere you also have to make a decision. And so my question is: why do we accept the ethics and the social norms that are being pushed by these small groups? We question this publicly, but we fail to question, and actually act on, the fact that we live in a very centralized society, where only a very few actors have centralized knowledge, wealth, and power, and we're actually feeding the beast on an everyday basis. And it's not just us as individuals; our universities and our governments are doing the same thing. Governments are looking to Facebook and LinkedIn and Google to find the answers to very difficult questions in society. If it were so easy, we would have fixed it already. Universities are moving away from their own mail providers and using providers like Gmail as their default platform. I mean, we have to become more critical users, otherwise we will remain in this lock-in, and then we actually don't have to answer that question anymore. That's it. Thank you.

So thank you very much. We have heard perhaps not the three most optimistic assessments of the present and future of who rules the internet.
So I will not delay us, in order to get to discussion, both amongst ourselves and then with you, very soon. Let me just point out one thing, and I will be brief. There are people in this room who study internet governance, and I think they would question, I wouldn't say the assessment, because this question of who rules the internet can obviously be answered in very different ways. But I'm very grateful to Graf von Kielmansegg for introducing into the debate an important actor that I think was, I wouldn't say missing, because it came up in your assessments, but overall at least strongly backgrounded in relation to older or still ongoing governance debates: namely the government, and state actors, and other actors which contribute to shaping the internet and historically have contributed to creating the internet in the first place. One thing that I noted down is that the open web is apparently already dead, if I listen to your assessments; we apparently no longer have it, or are on the verge of losing it. So I'd like to point that out, and for us to consider who these different actors are, and that the government is maybe another actor that plays a role. So, to paraphrase: Kate, in a nutshell, and I'm simplifying, said that AI perhaps does not rule the internet but is an important force in shaping the internet. Would that be correct?

Well, I think it's really difficult to pinpoint this thing that we call the internet. I think that's going to be a really interesting question for all of us in this room who think about these questions and research them. And for me, what I find really interesting about the turn to automated and semi-automated decision-making systems is that they've been with us for a while. They underlie a whole series of networks, and in many ways you may not even be aware that they're there. So in terms of how they act, it's less about whether they are ruling, and it's less about who, and it's less about the internet.
I think each one of those terms is starting to get very seriously fractured into a set of questions which is more about what, and about these entanglements of forms of governance, and AI, I think, is becoming a very significant actor very quickly. And to echo what Fieke was saying, which I think is really interesting: we talk about the big five technology companies, and in the AI world it's incredibly concentrated too. I'd probably say seven, because we'd include, say, Baidu and IBM. But that is seven companies who aren't just delivering us email or giving us platforms. They're deciding how we get into university or not. That is a system that will determine healthcare. That is a system that will determine whether or not you get housing. And these are filtering far beyond what we traditionally understand as the internet. So that's the governance layer that is most interesting to me right now, and I think governments do have a role to play there.

So I noted down one question, I made many notes and have many questions, but one thing that stuck out to me is a question that you asked, and I will force you to answer it, or to try to answer it, which is: how do we govern what we don't understand? How do we maybe begin to govern what we don't understand, and what could be paths to governing it?

Thanks, Cornelius.
That's the hard question. Look, I have to say, if there's something that keeps me up at night, it's that question. We look to governments to figure this out and to know exactly what to do, and of course governments are looking to us; they're looking to a combination of people who are academics, who are in the technology sector, sometimes also activists and people on the ground and NGOs, and everyone's looking at each other saying: how do we deal with these systems? I think what we urgently need to do is to produce some really strong interdisciplinary research groups working on these issues. Certainly that was part of what we tried to do with AI Now, to bring people from very different fields together. I know that's something a lot of people here understand the value of, but I think we're still spreading the word about why interdisciplinarity is going to be crucial, and why these issues cannot be solved by computer science alone. Nor do I think they can be resolved by social science or critical humanities alone; I think we have to be working together. That, to me, is how we're going to figure out how these systems work, and then potentially think about what kinds of governance systems are appropriate. I'm not going to pretend that's easy. I think it's the biggest challenge facing us for the next 50 years.

So, a call for more interdisciplinary research, which obviously needs a lot of funding in the future. What do you think about that?
So, Caroline, to turn to your comments: I was at a Council of Europe event in Estonia last week, and there was also discussion about the role of Facebook there. There was a critique of the Facebook news feed and the biases inherent in it, and then somebody, essentially calling for more regulation, said: what about somehow regulating it? And the response that came was: imagine Europe would somehow decree what the Facebook news feed should look like, with a quota, like the one that exists in France, of 25% content of a certain variety, or things which are good for democracy. How would that fly with users? What I'm trying to say is this: if others were making the decisions that you were saying are made by the platforms, would that even work, and would that put the government into even more disrepute than it presently enjoys? Is that not bad?

Maybe not in relation to Facebook, but in relation to Twitter this is partly already happening. During our digital methods workshop yesterday it was addressed that some countries are able to have certain content deleted or hidden on Twitter, but that this kind of content can be made visible again via the API. So this is already in place in various different forms. What is also in place is the fact that content is constantly situated and localized based on your specific location. So we are in a situation where imagining national governments having some kind of influence is on the one hand completely weird; however, location has a very strong influence on what content is being shown and what you can actually encounter, so this uncanny notion of location can be detected. And I would actually like to add something to the question that you asked Kate at the start of the discussion, answering the question of who rules: the platforms, or the specific actors that we drew up, they rely on the kind of
distributed accomplishment. I think Fieke nicely pointed that out in her presentation as well: platforms cannot act by themselves; they rely on the engagement of users, who basically have to use them. So those different actors who are involved in ruling are all reliant on the involvement of other actors.

Okay, but is there an alternative? If I forced you to propose some sort of alternative to what we have, what would that alternative be? How else should we be running these things? There's the Chinese option, there's the Russian option, there are many options. Are there any better options than what we have?

That's a question I think I cannot answer. This is a question I would like to put back to all these 500 smart people, and I think we should spend the next two days thinking about that. My answer, because I can't just escape by giving it back to the audience, my answer would be to acknowledge that distributedness, to which we have to respond and react, and to understand that we cannot understand maybe algorithms or AI per se, but have to understand how they operate in specific situations, and to adjust the relations in our lawmaking, maybe, to the fact that there is no one understanding of that.

So I'll step up the provocation even a little bit more. Fieke, on what you talked about, putting it as extreme as possible: are we whining because we as Europeans are feeling essentially colonized by these companies from another country, which are extremely dominant? Are we sort of using our iPhones and other things, and then saying, in a moment of rapt reflection, ah, this is all really bad, and then going back to checking them? What I'm trying to say, and I am serious a little bit, is that you can have endless panels like this one debating how bad this all is. Is this change going to happen? And I don't say this being such a pessimist. I say this because if people really did care that much, then
maybe more would happen. You, I believe, said they don't act because they can't, because they're not powerful enough. But people have acted around the environment in very strong ways; they have acted around a lot of social issues in very strong ways. So maybe the problem is that this is not a significantly big issue for many people.

I think there are a few issues around this. One is: do I think Europeans are uncomfortable with Americans, from a libertarian point of view, trying to influence the continent? Yes, I think so, if you phrase it this way. Because what I see, not only in Europe, because I mostly work in other parts of the world, so not in America and mostly not in Europe, is that the internet is still always seen as something magical; technology is seen as something neutral. And those who research it, who know it, who code, they know it's not. But the thing is, it's very difficult for people to have frameworks to think about data and the internet; they're very abstract concepts. If you ask somebody in Asia, for instance, do you use WeChat, they say yes, and my friends are on it. And then you ask them, so who owns it? And they're like, a Chinese company, and all of a sudden this light bulb appears over their head. Because it's very difficult to make critical choices about things that are very abstract. So I think once you put it back into a rhetoric or a political framework that actually works for people, how they grew up, it becomes at least easier to understand. And this is not meant to be derogatory, but in South America I think the best way to spark a conversation is to say that Facebook Zero is the new form of colonialism. Trust me, then you have a political conversation. If you keep it to Facebook Zero itself and access to the internet, nobody engages. So I think it's also about framing it in a way that fits people's reality. And then I think that's only part of the
problem. I think the other part is that people are really uncomfortable, but it's difficult to pinpoint why. It is because we still think as users: how difficult can it be? It's maybe about the one text message we send, it's about the one email that maybe Google reads, it's about the one Facebook post. But it is about the entire data set; it's about the patterns and behaviors that are actually interesting. And it's about the data that you actually cannot control, because we still think about control in the sense that we can control whether I send an email, whereas most of us know it's no longer on that scale anymore. So then it becomes very abstract again for people to wrap their mental frameworks around. And I think these are two very difficult things when you talk about whether Europeans are just whining, or whether it is all this dark, because if you're not in this field of internet research, most likely you have not thought about it in these ways.

About your provocation, why are we still using these systems when we could actually make a choice to leave: I do worry, though, that we're trying to put so much responsibility onto individuals to step away from the evil platforms, whereas in reality there are so many reasons why people can't. The opportunity costs to employment, to their friends, to their families are so high. But even beyond those arguments, which I think have been made for years, there's this new argument, which is that even if you didn't have an iPhone, it actually doesn't matter anymore. If you're walking down the street, you're being collected by a whole range of sensors. If you are engaging with people who are also using those devices, you are then part of this broader tracking universe. So what is interesting is this idea that we still think we have agency over our participation in these processes at an individual level. I really don't think we do anymore, or at least that sense that we do is very rapidly diminishing.

I completely agree, and this is also
what I meant when I said we have a feeling about the individual piece of data we transmit, whereas it's really about all the data collected, which is beyond our control. But, while I agree with you, I don't agree with this: I find it frustrating that that would mean we roll over and die, whereas I think we should stand up and make different choices. And maybe that's the choice that tips the scale 5%, but that might actually be room for other interventions to happen, and other critical questions, and maybe other mechanisms to come into play.

I think that's interesting, because then we can talk about your question, Cornelius, which is: what are the political levers? Rather than assuming this consumer choice model is going to be enough to change things, it's when we stand up collectively and start talking about what is acceptable and what is not acceptable, what are ethical parameters, what are legal parameters, what are technical parameters, that I think there is a lot more potential. I'm not going to buy this non-fair-trade kind of version of dealing with technology. But I think this is what concerns me most: I don't think we yet have sufficient collective models of how we think about changing information ecologies, and they're starting to move into so many new areas of everyday life. I think that's the big challenge.

So instead of rolling over to die, we should assert our agency. Now let's take all of you, integrate all of you, into the discussion. There are mics; they will be passed around, so please wait until the mic reaches you, please state your name, and please be aware that this is being recorded, so if you would rather ask a question without being recorded, you can do that after the panel.

Hello, Martin Schmitt from the Centre for Contemporary History in Potsdam. Thank you for this wonderful talk and these wonderful perspectives on the question of who rules the internet. I want to make two
points, two questions. First of all, what about infrastructure? You talked a lot about the application layer, but below that there's a whole infrastructure. I just published a book about how the protocols of the internet were developed from the 1960s on and how actors realized their interests through them; it is called Internet im Kalten Krieg, Internet in the Cold War. So what about the infrastructure question? For example, if you're saying that there's a 1% of the 1% owning the internet, then I want to pose the question: do they also own the infrastructure? A wonderful example is Facebook: they don't own the infrastructure, so they bring balloons into the air to bring the internet to every part of the world, because otherwise they can just be cut off, because they don't own the infrastructure. The second thought is: if you had posed this question, who rules the internet, 10 years ago, the answer would directly have been: you. So what happened to this wonderful thing that changed the dominance of Microsoft, the dominance of certain big companies? This has happened in history quite often, so, and I'm not proposing this, I think it can happen again. So what about the thought of the hot new thing, the next new thing?

Who wants to answer? I don't know how many questions that was.

I can answer part of the infrastructure question. I'm not an infrastructure expert, but as Kate mentioned in the beginning of her talk, there are so many levels to who owns the internet. The balloons are actually part of Google's Project Loon, and Facebook has Facebook Zero, so Facebook piggybacks on the telco infrastructure in countries, whereas Project Loon uses hot air balloons from which they offer Wi-Fi, and they also have Google Cloud. And on the infrastructure question: it used to be owned by governments, or at least in Europe, where you had the telco networks and then some of the IXPs; this has all been privatized. Now you have new sea
cables being rolled out to reach people in Africa. Brazil wanted to put a sea cable in between Brazil and Africa to be less dependent on the centralization of the infrastructure around the US, which so far has not been implemented. So I think the infrastructure is also majority-owned by companies. I don't know if you want to say anything about that.

Look, I think I have a slightly different view. I actually think that infrastructure studies have been extraordinarily rich in the last few years. I think of the work of Nicole Starosielski, who I mentioned earlier, but outside of academia artists are doing extraordinary work around infrastructure too. You could think about people like Trevor Paglen doing work with undersea cables, and we have artists who are doing really interesting work around satellites. This is very much of the moment; I think the infrastructure question is very much front and center. The question then becomes: what happens next, now that we have these infrastructures that are not just highly dominated by a small set of companies but deeply captured by the intelligence communities? What do we do in terms of thinking about these so-called liberatory infrastructures that we would perhaps like to be the next new hot thing? There are obviously activists who have been trying to do things like local area mesh networks that are somehow less captured. There's also somebody here in Berlin, Frank Rieger of the Chaos Computer Club, who is developing a system for shutting down stingrays, the devices police use to intercept cell phones at protests, so that you can make sure that if you're at a protest and you look at your phone, you'll see: this is not a real network, this is a police intercept. So there are people thinking about these questions in both academic and very practical ways. I
think it's hard work but I think it's really important Zizi had a question we haven't answered the you oh I'm sorry I didn't want to my bad just very briefly you had the second question on 10 years ago we would have said you I don't think that we would have said that the question is who would have said that and the platforms would have said that you are ruling because they provide the infrastructures for users to take it into any kind of direction they want or developers so a very different crowd would have said you I think if you would have asked all of us 10 years back maybe we wouldn't have said platforms but social media networks but that would basically be something similar so the you has never I think been imagined by academics to be the big ruler thank you hi everyone this is working I'm Zizi Papakarisi from the University of Illinois at Chicago thank you so much for such interesting opening remarks and currently it's a wonderful job moderating and asking important questions and then answers that are so inspiring have me have a million thoughts going through my head that actually had to write my question down so I'm going to read it I wonder whether the real problem is that we're running on very very dated systems of governance that were built to run worlds that are very different from the world that we live in now and you kind of sort of see this the satisfaction with the ways in which people are responding to elections and referenda in very irrational ways to go back to what you were saying Kate in this past year so Cornelia is to return and maybe press on the question that you asked you know perhaps not Russia or China but is there a better form of democracy or democratic governance that we simply have not arrived to yet or in other words you know after democracy what or is AI perhaps a way for us something that's there to help us better form of governance Zizi what a beautiful and impossible question I love it I don't know the answer that's why I'm 
asking! The future of democracy and AI, done. Look, I apologize for the sort of paltry answers that I can give you, but I would like to echo that a lot of people are thinking along these lines right now, and certainly part of the reason that I do the work I do is so that we can better inform AI systems, to ensure that these kinds of democratic spaces can be preserved and, who knows, possibly augmented. I think that is perhaps one of the great optimistic visions, hence my passion for really building up research in this space, because we don't have a lot of work to go on to answer that question. We can imagine it, but the imaginative space, I think, has been really dominated by a very small group of people who are really just interested in superintelligence and how we think about killer robot overlords. I want to ask that question. I want to ask: how do communities get involved in designing AI systems that might be relevant to their local areas? That is something we could think about in a co-creative way, drawing on ethnographic practice sitting alongside leading-edge machine learning. That inspires me. So I want to see what that would look like, and I just think it's a lovely thing to work towards.

There's a question in the background.

Hello. I just wanted to take up that theme that Zizi mentioned, in the sense that: do we need to rethink what we mean by democratic control, in the sense of what constitutes the demos in internet regulation? Because the theme that I think carries through our plenaries and through a lot of the panels, as we talk about regulation and so on, is a lot of concepts that are very closely associated with the nation state, which naturally isn't the category through which most of the internet operates, either in its regulation or in its content. So do we actually need to be more than communication scholars, and probably slightly more social theorists or sociologists, and re-engage with the question of globalization and the demos to a much greater extent?

I might have a provocative answer for this. Like how Caroline said: who's the "you" in Web 2.0, and who created this narrative? I also think that the narrative about how the nation state is no longer the power holder in governing the internet is a mantra that gets sold from a very specific economic point of view. Last time I checked, I still pay taxes to a country that actually does apply the rule of law and that does have power, and ignoring that fact, or underestimating the power they have, is actually buying into this promise that comes mostly from the internet tech community. So of course, I mean, I'm not an internet governance or policy person, and of course there are a lot of challenges and it goes across borders, but I do think that we have to really critically examine where this mantra comes from, because last time I checked, it didn't come from the people or the governments; it came from somewhere else.

I actually had John Perry Barlow's Declaration of the Independence of Cyberspace here; initially I wanted to quote it to you, but it was completely unnecessary, because we went down that road anyway without doing that, and we got to the point where we are now from a point where originally people had very lofty ideas about borders and taxes and all the things that you just mentioned no longer being relevant. Anyway, questions? Otherwise... I see one at the front and another at the back.

Hello, I'm Meredith Whittaker from Google. Thank you, yes. Good evening. I think this is an interesting question, but it's also pointing to some of the stuff all of the panelists have said, which has been really fascinating. If this question were pointing to some deeper truth, it would be a lot easier to answer: you have a linear relationship, there is a ruler, there is an internet, there is then someone you can confront with the impacts of these incredibly complex
and often unintentional systems. So I think it would be interesting to rephrase this, potentially: how do we have agency over the capabilities that a proliferation of network technologies has enabled us to either benefit or not benefit from? And how do we even know what those are, and determine a relationship between what these technologies do and how we are impacted, in our many different ways? It would have been hard to fit on the program, though.

I want to agree with that very quickly before passing to the other panelists. It's a really comforting fairy story to think there is a "who" at the top, or one company, or even a small cluster of companies, and that if you could only get past that, all of this would be resolved. There is something much more interesting, which is to think about: well, if there isn't this sort of monadic, singular center of power, if it's so dispersed, if it's so complex, what are the ways that we engage with it? And I think there are ways; that's the really optimistic part. But I think we've got to do a lot of work to get there. Sorry.

One way to think about engaging, and to think about agency in this very distributed setup of actors, is to think about dissent, and that can connect to the question that Zizi had before. How can we imagine more democratic infrastructures that are so distributed, infrastructures that enable dissent, that allow the different actors to pursue their different objectives and forms of valuation, to follow what they think is relevant for their own life without interfering in what other actors consider relevant for them, or to negotiate that on a level that not only enables platforms to make sure they have enough net value at the end of the year? So the dissent of objectives that is enabled by infrastructures would be one way I would suggest to think about modes of agency in these distributed environments.

So I think now we've complicated all aspects of the title: the who, the ruling and the internet. And there's a question in the back, I hear, so perhaps that question.

I'm here. Felix from Queensland University of Technology, actually German. So the last time I was forced to give a little bit of my data was by the Australian state during the census, where I was asked where I was five years ago, how much housework I did last weekend, whom I was living with, where I was the day before, whether I'm married or not, and in what relationship I am with my flatmate. And I had to fill it in, otherwise I'd have to pay a fine of, I don't know, I think it tops out at $1,800. Facebook never asked me to do that, and Twitter neither. And still, when I want to go on the internet, and that's the same for everybody else who has some technical knowledge, I can stay quite well under the radar of everything. So I'm kind of free, I rule this kind of internet, I would still say that, but you need some technical knowledge. And then I'm wondering, on the other hand, why is it that the states that are the best at governing platforms are the states that I want to live in the least? And the third aspect I want to raise is that if North Korea were a platform, the world would have one problem less, because people just would not use it. So that leads me to my question: might it be that when, as academics, we start to talk about platforms in a kind of alarmist way, saying, oh, they rule our lives, when actually they don't, because they rely on us as their users, which is kind of a democratic thing, and if we start talking a lot about how to govern those platforms, aren't we actually in danger of going in the direction we don't want to go? That is, more regulation of something that's actually free to join or not, by something that I cannot choose, because I cannot choose my nationality.

Well, there are so many questions in there; I'll just very quickly touch on two. Certainly, in terms of the census, I hear Australia had a particularly rough time of it this year,
apologies to everyone who had to deal with that. I'd point to the work of Kevin Driscoll, a fantastic article looking at the history of the census as a kind of early large-scale data technology. What is interesting in your alternative view is that Facebook didn't ask you if you were married or how much housework you did. I would actually say Facebook is trying to elicit that information from you in a whole lot of ways, sometimes directly, sometimes indirectly. And the other thing I would say is, I'm sure you've seen the University of Cambridge study that shows that, with quite extraordinary levels of accuracy, they can predict whether you are married, what your sexual orientation is, whether you're a drug or alcohol user, based on your Facebook likes. So you're not necessarily liking anything related directly to those topics, but they looked at how you can do these large-scale correlations. Even when they're wrong, these companies are building up those very detailed portraits of you. So I think that is something we need to keep in mind: even if you think you are relatively free, those kinds of data profiles are being built. But I do agree with you that we have to move past this kind of user-centric model, where we have this idea that it's only through user engagement that you are actually participating in these platforms. I think it's much bigger than that.

I just want to add to this, because I completely agree. So, I badmouthed Google a bit in this talk, but like states, Google is also not a monolithic entity. It's not one company; there are a lot of people working for it, and each has their own individuality. And I think the census you brought up is actually quite interesting, because what you see in certain English-speaking countries, the UK, the US and Canada, is that the census is done by the biggest arms manufacturer in the world. So you cannot opt out, and your data is being collected by Lockheed Martin, who also makes arms, and you even get a fine if you don't participate. So for me it's not like, oh, the big five are worse than governments. I think the data society makes things very complex, and what we need to do is unpack it and start asking about the politics and the questions behind it, because if we don't do that, we just take them at face value. It's like Shell saying, I take care of the environment, the same way a big data company would say, I respect your privacy. Of course not; it's their bread and butter. So I think it's about making it more complex and actually asking about the politics behind it. And I do want to stress one point that you made, which was that if you have some technical skills, then you can fool the system, which I highly doubt. But that also means you believe that if you have technical skills, you become the elite, and you become better than the rest. So it doesn't matter that the people who don't have those skills will then get profiled, and we create a whole new class system based on this. I think that's even more problematic. So I think there are certain things we need to politicize and actually question.

Yeah, I think in the interest of time, and given that it's drawing short, I will give the last question to Jennifer Stromer-Galley, the president, who raised her hand in due order. So the final question is yours.

So the conversation about the internet leads to this conversation about boundaries, which leads to these questions of governance. And when I think about governance, and how we go about doing governance, that work of policymaking and legislation and rulemaking, ideally those rules are driven by a set of values that collectively we want to uphold in the rules and the regulations. And so, maybe this is a happy way to end, but I'd like to hear your thoughts on what the values need to be that underlie the regulations that structure and shape these systems.

I would start answering this question, which could be answered in many different ways, with values that enable
that people are not discriminated against by the effects of algorithmic processing, AI and so forth; that these kinds of processes enable people to participate in various forms of society without being discriminated against on the basis of choices they have made in the past. That would be one very important thing, and that requires a very detailed understanding of how these discriminatory effects are enabled, a very specific form of transparency, because by just looking at AI or at algorithms, you don't necessarily know exactly how they will decide in a very situated context, in relation to the data of specific users. So understanding how these effects are being produced, I think, is absolutely important to make sure that, yeah, they don't constrain participation.

I love this question. I guess I would just briefly add that I think all of these systems we've been discussing tonight, from the infrastructural layer to the whole range of super layers of machine learning and beyond, come with a particular type of values baked into how they work. The minute we ascribe to the belief that, with more of these kinds of data systems, we get closer to what people really want, we get closer to the truth, we get closer to objectivity, we've taken a perspective that carries a very certain value with it. I think I'd be really interested in asking: what kinds of values drop out of those systems? What kinds of values don't fit within the easy metricization of human activity? And I think there we can actually turn to some really interesting work in feminist theory, in race theory, in a whole lot of work that's been done in the 20th century, to think about who gets to make the values and how we might think about participation in different ways.

The only thing I would like to add to these values is also the collective, the social. It's no longer about the individual, because my actions might exclude somebody else now in the data society. So how can we take responsibility for a larger collective good, where everybody believes people should have access to healthcare, education, all these things? These are exactly the things that get eroded in the data society. So how can we, with our individual values, make sure that the collective is also served?

Yeah, thank you so much for offering a glimmer of hope that we will be part of ruling, not just of being ruled. So thank you so much, and please give a warm round of applause for the panel, who I think have done a stellar job. And so what remains for me to say is: please stick around, have discussions about who rules the internet, or other things that you find interesting, and enjoy the reception that we have outside, and the drinks. Thank you so much for coming.