is a postdoc at the University of Liverpool in media studies. And one of the things, just by way of introduction, that I was interested to observe as I came to MIT, compared to media studies, not as a media scholar but as a practitioner, was all the ways in which the field here, certainly at MIT, was rooted in the work of communications engineers, that many of the roots of media studies were actually in the work of engineers. And I particularly think about that in the way that the concept of signal and noise, which is so relevant in our work, had its origins in that field. And what's so intriguing about Elinor's book, to me at least, is the way she points out that media scholars have tended to focus on signal. And so she's taking an interesting look at the noise. And the other thing that I'm struck by, Elinor, in your work is the focus on sound as a metaphor: the ways in which we use metaphors around seeing, and you're suggesting that we use more metaphors around sound. So I'm really intrigued to hear what you have to say. I'm sure everyone else here is too. So with that, I will simply hand the floor over to you. Great. So thank you very much, Scott. I'm going to do the screen-sharing part before we start. So thank you very much for this introduction. And as you said, the timing of this book was perfect for the apocalypse. And I had my share of ugly crying about not being able to actually come to Boston. But thank you so much for giving me this opportunity to present to you. And I'm really looking forward to the debate. I have about an hour and a bit more, but I will try to shorten the presentation so that you can have more questions. And I'll just start. So hi, everybody. I'm going to do a talk about my new book, Media Distortions. You can download the book, it's open access, and you can even get a playlist with dark motifs and things like that, and you can see things that didn't make it into the book. So please go to that website. A bit about me. I like to go to Comic-Con, as you see here. I thought that it would be suitable to show you how I always think about different kinds of bots and different kinds of deviant things. I'm a researcher, activist, feminist. And currently I'm working on several projects which continue what I'm going to talk about today around data literacies. I'm going to talk about that a bit more towards the end of the talk. But I also want to talk about my background. I used to be a radio broadcaster of psychedelic trance. I used to edit television channels. I used to be a journalist covering electronic dance music culture and write about that. And I wrote a book; my previous book was about the Israeli psychedelic trance culture. So for me, sound has always been a way to think and examine things through. So I think this is quite important as an introduction to this book, an origin story, if you will. So why focus on the deviant, you ask? Which is something that, for me, I was always very attracted to. So even if you think about the Israeli psychedelic trance culture, that was a culture that was sort of deviant within Israeli culture. And I was always intrigued by how different kinds of things get categorized as deviant. So for me, part of the power of examining things that are a bit on the outskirts, a bit called deviant, is to really understand the politics of drawing these kinds of boundaries of what is deviant.
And by understanding what is deviant, we can understand what is the norm much better. So in my book and in my research, I always try to understand who created these kinds of categories of what is legitimate and what is illegitimate. Why, and with what rationale, did they do that? Who do these categories serve? And how do these categories affect the way that we engage with media and technology? I also question what a media phenomenon is. So when I started my PhD, which this book is based on, a lot of people said, oh, but this is marketing, or this is advertising, or this is computer science. And actually, people thought, oh, we know what spam is. It's like emails about Viagra and Nigerian princes. And for me, one of the main points to take from my book is to question these kinds of things that we take for granted, whether it's different kinds of terms that we use or different kinds of definitions of things. And as media scholars, I think it's really important for us not to take for granted what computer scientists tell us things are, but also what marketers or advertisers tell us things are. So this was kind of my standpoint. And also, of course, to examine these kinds of boring things, these kinds of things that are common sense or taken for granted. So in the book, I examine three case studies. I'm going to focus on one today, but I encourage you to read the book and you'll see all the others. And what is really important for me is that when we're talking about these kinds of deviant media categories, they keep on changing and evolving. So I focus on noise in the early 20th century, with Bell Telephone as one of the biggest media companies of the time, and how they structured different kinds of territories and people's behavior. Then I focus on spam, which is the end of the 90s and early 2000s. And then I focus on Facebook and how they categorize anti-social behavior. So what is important for me to say is that I think one of the problems for us as media scholars is that our media research objects keep on changing very fast. And I think one of the things to take from this is the bigger questions. Some of us are probably doing research on Facebook, which, hopefully, won't exist in a decade. But what do we actually take from the questions that we want to ask about Facebook? And this is something that is also really important for me to emphasize: media are going to come and go. But the larger questions of how different kinds of categories shape our behavior are something that's really important for me to examine. So in a nutshell, my book is about media power. However, most of the time when we think about media power, and in most of the theories that we have around it, especially when we talk about the internet, but also outside the internet, we just take Michel Foucault's Panopticon, which uses very visual concepts about what we can see and what we cannot see. But what's also really important about the Panopticon is that architecture has a huge element in this kind of media power theory. We also have a lot of different kinds of terms. When we talk about these things, we talk about vision and invisibility, when we talk about algorithms or AI and different kinds of things like that. And of course, let's not forget Frank Pasquale's book about the black box. And what I sort of realized as I was examining my research — and, if you noticed, just now I almost said I saw —
so it's kind of crazy how ingrained these visual concepts are in our terminology and in how we think and how we explain different kinds of things. So when I was writing the book, I had to change different kinds of words into sound concepts, which made me realize how ingrained it is in how we think and engage with things. So these are the kinds of things that I felt were missing within the visual frameworks: when we're talking about these kinds of power relations with media companies, we're talking about multiplicities. We're talking about multiple actors, multiple spaces, multiple times, multiple purposes of conducting listening, and rhythms, which I'm going to talk about shortly, and different kinds of architectures. So what vision doesn't really allow us to do is go across the boundaries of spaces. And usually when I do this kind of talk, I do an experiment where I show that if I shout, it's going to pass through the walls, but my vision is constrained within time and space. So the theoretical approach that I developed is influenced by multiple other approaches, such as media theory, which I'm going to talk about a bit later, and science and technology studies, software studies, feminist technoscience, critical legal studies, and of course, sound studies. And it's really important for me to emphasize that I used grounded theory, which means that I didn't assume that I knew all these things in advance; these things came up as I was examining the material that I was doing research on. So these are the sonic epistemologies, and I'm going to come back to that a bit later. And for the people who are reading my book and may have noticed, each of the case-study chapters is divided in two: the first half is dedicated to the structuring of the territory, which is the medium, and the second part to processed listening. So what I'm trying to say with these things is that I am taking different kinds of approaches, and I'm showing how power relations are constructed with these two concepts, which I'm going to talk about shortly. And I'm going to start with the first one, processed listening. When we're talking about science and technology, most of the time when we're talking about knowledge production, we usually hear about theories that use vision concepts. However, there are sound theorists who have been using sound as a way to understand and to produce knowledge. One of the main theories was developed by Alexandra Supper and Karin Bijsterveld from the Netherlands. And they're talking about different kinds of practitioners who produce knowledge. So for example, doctors, when they listen to you with a stethoscope, or car mechanics who listen to the car, and how they make different kinds of diagnoses in order to understand what's happening with different kinds of bodies. So they make this kind of classification of different modes of listening and how these practitioners make their assessments and then make different kinds of claims about whether a body is healthy or malfunctioning, or different kinds of things like that. What I noticed is that when we're talking about the online environment, or a mediated environment, these modes of listening are not enough, because we engage with a different kind of environment. Therefore, I developed a new mode of listening, which is called processed listening.
It is a mode of listening whereby practitioners, who can come from different kinds of professions and interests, listen to different kinds of sources with different kinds of tools at different times in order to produce different kinds of knowledges. And by knowledges, I mean different kinds of profiles. So for example, when Facebook listens to our behavior through different kinds of cookies and pixels, they create a certain kind of profile of us, and then they can rearrange the platform in different ways to make interventions in our tempo-spatial experience. And this is what I mean by rhythmedia. Now, when I was trying to examine how different media companies shape the way that different kinds of information flow or don't flow, I realized that people use different concepts like flow, data streams, data traffic, and channeling. And what I realized is that these concepts don't really explain the politics behind how different kinds of information or different kinds of connection are made possible or impossible for us. So I was very influenced by, as I said before, different media theorists, especially Raymond Williams and his concept of planned flow. I was also influenced by feminist technoscience and its notion of process, and of course Henri Lefebvre's rhythmanalysis. And basically what all of these combined concepts are saying is that when all of these companies, whether it's Facebook or, as I'm going to talk about shortly, the online advertising industry, listen to you through different kinds of instruments in order to create a profile, they then create different kinds of architectures that change according to that profile. So the way that we engage with platforms has a particular ordering rhythm, which is influenced by different kinds of political decisions that are usually driven by advertising logic and, obviously, money. So unlike, for example, a doctor who listens to your body in one session, an event that has a beginning and an end, when different kinds of platforms or online advertisers listen to our behavior, there isn't a beginning or an end. So for example, if Facebook listens to me, it's not like, oh, now I know everything there is to know about Elinor by the 22nd of October, 2020, and I don't need to listen to her behavior anymore. There is an ongoing process of listening to my behavior in order to have a richer profile. And this profile then helps these companies create different kinds of architectures that are arranged in a specific way to make me engage more, or to make me click on a specific ad, or to make me maybe comment on different kinds of inflammatory posts and things like that. So with rhythmedia, what I'm trying to say is that media companies reorder different kinds of components in a way that orchestrates a desired rhythm. With this kind of ordering, they decide what sociality is, while filtering out problematic rhythms which they define as either noise or spam or antisocial behavior. So these practitioners conduct the way that mediated architectures change according to the knowledge that they gain from processed listening to people's bodies across multiple spaces. Now, if this sounds a bit confusing, don't worry, I'm going to go into one of my case studies now.
So, at the end of the 1990s — and I just want to say that it's really hard, when you write a book, because you want to talk about everything and you don't really know what to focus on — I really wanted to focus on the standardization of web metrics, which is the end of the 1990s and the beginning of the 2000s, because I think that everything that we experience today around profiling, around fake news, around disinformation and misinformation, and basically the problematic and broken online ecosystem where we are the product, started in those times. So I think a lot of the time people talk only about the last decade, but we're talking about processes that took probably 20 years, and probably even more, and I think it's really important to identify key moments where these kinds of things, which is basically the surveillance of our online behavior, became normalized. So in this case study, I'm basically looking at how the standardization of web metrics happened, which means how to measure different kinds of behaviors in order to trade them in an efficient way. So as you can see here, the IAB, which is the Interactive Advertising Bureau, wanted to standardize different kinds of metrics, because in those times, if you remember, the web wasn't really a technology that people knew was going to succeed. A lot of the time we take for granted that the internet succeeded, but actually in those times the subscription model didn't quite work, and with the dot-com bubble crash, people didn't really know what to do. And only specific companies managed to survive the dot-com bubble crash, including Amazon, and then different kinds of platforms started to emerge which gave a free service, which, as we all know, wasn't quite free, because we were the product. So let's see what happened in those days. The IAB wanted to understand: how can we standardize different kinds of measuring units? How can we standardize how we measure people's behavior in order to make this product, which is our behavior, as efficient as possible? Because if we're the product, then the currency needs to be agreed upon by all of the actors involved. So as you can see here, these were the most common metrics. And what they basically wanted to understand is how they were going to measure them, and from which angle. Will it be through the ad serving? Will it be through people's computers? And what will actually count as a click? What will count as a total visit? And what will count as an ad impression? Now, for me, one of the most interesting parts of this research was to analyze different internet standards through the IETF, which is the Internet Engineering Task Force. And I also analyzed different kinds of legal documents in order to understand how they actually decided what happens there. So when I started to read the cookie standard, which is what you're looking at here, I started to realize that what's happening in the back end is that when you visit a website, through different kinds of default settings you are sent a lot of cookies — it could be dozens of cookies, it could be hundreds of cookies — in order to send different kinds of information about your behavior. If they come from the website whose address you typed in, those would be first-party cookies.
So that means that, you know, if for example I look at the Guardian, then it will be a first-party cookie that the Guardian uses to collect information on me. But if it's third-party cookies, it could be ad exchanges or different kinds of data brokers who are also going to listen to my behavior across different kinds of platforms and throughout time. So what I basically saw here is that there is a standard, and we're actually not really aware of what's happening. What was really interesting with this standard is that the people who suggested it at the beginning said, well, actually, maybe part of the standard will be to show people what's happening in the back end, because people don't really understand that all of these things are happening. And the advertising industry at that time said, no, no, no, it's going to confuse people. So let's just show them the front end, as computer scientists call it, so that they won't be confused by seeing what's happening in the back end. But when we think about cookies, one of the things that struck me is that all of the people who talk about cookies, whether computer scientists or even media scholars, call them just a text file. But this "just a text file" is actually a form of communication. What cookies do is plant different kinds of text files, and every time I do these things — whether I'm reading an article or clicking on something — they are communicated to different kinds of entities. And those can be, as I said before, the Guardian, if it's first-party cookies, or other data brokers or different kinds of agencies which we are not aware of. So this is what I meant about not going along with the common sense or the common way of defining things. This is one of those moments where I realized that we've been sold the idea that cookies are just a text file — and this is also part of the legislation going on behind it — but actually they're a form of communication, because they're communicating different kinds of things, which could be my gender, the kind of device that I'm using, the kind of broadband that I'm using, and different kinds of things throughout time. So one of the other things that the advertising industry wanted to do, because there was all of this robotic behavior around, was to understand who is human and who is not human, because it was really important for them — if we are the product — to have exact measurements. So what they did was to develop these different kinds of measurements, and what they call filtration, in order to filter who is human and who is not human, in order to make accurate measurements of what is happening so that they can trade us. So as you can see here, they developed different kinds of methods. For example, the basic method, which was a text file listing robots. The other one was identification of specific suspicious and non-human activity, which was a different kind of list that you were supposed to send to the IAB. And another one was to analyze the rhythms of users' activity, which I found really interesting, because what they were basically defining there is: what kinds of behaviors can be considered robotic, and hence non-human, and what kinds of behaviors can be defined as human? And as you can see here, these are the definitions that they had of what robotic behavior is. So what did they analyze in order to establish that?
They looked at users performing multiple sequential activities, users with the highest levels of activity, users with consistent interaction attributes, and other suspicious activities. So what they were doing with that was not only trying to measure people, but also to define what human and non-human behavior is. And actually, one of the first times I realized that was with my first book. When I published it, I invited all of the people that I had interviewed to come to a launch event so that I could interview them. And I sent them all the same message on Facebook, because I was frankly a bit lazy and I just changed the name, and after a while I hadn't heard from a lot of them and I was really offended. I was like, I interviewed you, you can at least tell me yes or no or something. And then one of them, who works in the music industry, told me, oh, did you send the same format of message? And I was like, yes. And he was like, oh, well, Facebook will then think that you are a bot or a spammer of some kind, and it will send your message into the "other" folder, which could be the spam folder or the junk folder, because they analyzed your behavior and this is a sequential activity with the same content, where you maybe just change the name. So that was something that triggered my thinking about how these companies make these decisions about which kinds of behavior are legitimate and which kinds of behavior are illegitimate, and then shape our behavior accordingly, because what that label manager told me is that he, of course, sends the same kinds of messages, but he changes them in a way that tries to avoid being labelled a spammer. So that was the part about analyzing the internet standards. Here I'm showing you how, in the European Union, when they were trying to make legislation around the electronic communication that was starting to become more popular at the time, they actually defined that, and how the advertising industry lobbied the European Union and managed to bypass it. So what you're looking at now is the spam article from the 2002 directive on privacy and electronic communications. And the way they define it is the use of automated calling and communication systems without human intervention for the purposes of direct marketing, which may only be allowed in respect of subscribers having given their prior consent. If you're European, you probably know about prior consent; I'm going to talk about it shortly. But we know that there are a lot of different kinds of activities which we would consider part of that. So for example, if you remember the horrible spam attack where we got U2 pushed onto our computers without being asked, without our prior consent — when Apple talked about it, they said, oh, well, actually, under article 2 of that law, once you have some kind of connection with a company, that makes it informed consent. Informed or implicit consent means that if you have any kind of interaction with a company, then you are in a kind of weird relationship where they are allowed to send you these kinds of marketing things. And at the end of the nineties and the beginning of the 2000s, there were a lot of debates about whether cookies should be legitimized and what was happening there.
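[Editor's note: to make the rhythm-analysis idea above a bit more concrete, here is a minimal, purely illustrative Python sketch of that kind of filtration heuristic — flagging identical sequential activities and implausibly high activity rates as "robotic". The event format, thresholds, and function names are assumptions made for the example, not taken from the actual IAB guidelines.]

```python
# Illustrative only: a toy version of the rhythm-based "filtration" described
# above. Thresholds and the event format are invented, not the IAB standard.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    action: str        # e.g. "click", "message"
    content_hash: str  # hash of the message or interaction content
    timestamp: float   # seconds since some reference point

def looks_robotic(events: list[Event],
                  max_identical_repeats: int = 20,
                  max_events_per_minute: float = 60.0) -> bool:
    """Flag activity as 'robotic' using two crude rhythm heuristics."""
    if not events:
        return False
    events = sorted(events, key=lambda e: e.timestamp)
    # Heuristic 1: many sequential activities with identical content
    # (e.g. the same message sent over and over with only the name changed).
    repeats = Counter(e.content_hash for e in events)
    if max(repeats.values()) > max_identical_repeats:
        return True
    # Heuristic 2: an overall activity rate far above a plausible human rhythm.
    duration_min = max((events[-1].timestamp - events[0].timestamp) / 60.0, 1e-6)
    return len(events) / duration_min > max_events_per_minute
```

[The point is only that "human" and "non-human" end up being drawn by thresholds of exactly this kind, which is the boundary-drawing the talk describes.]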
And this is an article that I really liked, from 2001, on the left-hand side, where the chairman of the IAB United Kingdom says cookies have been branded as spyware tools or some kind of subversive software, but it's what we use every day. And I think it's quite telling that this is 20 years ago, when even then people understood that cookies and pixels and all these things are basically spyware, doing this kind of surveillance over our everyday activities. But the normalization that happened over the years is something we need to remember happened through the lobbying of the advertising industry, which managed to make it all seem nice and friendly — even with the name, when you think about "cookies". So actually, according to the law, cookies and spam are exactly the same thing, because both of them are sent through automated systems, they are meant for direct marketing, and they are sent without our prior consent. But it was really important for the advertising industry to make a distinction between legitimate advertising practices and illegitimate advertising practices in order to standardize what was starting to become the online data broker ecosystem that we know today. So part of what we understand and know today is real-time bidding, which happens at the back end of our screens, and we don't know about it because, as I said before, the advertising industry lobbied for us not to understand what's happening in the back end. And basically Facebook and a lot of these companies, including Google, took this standard of real-time bidding and developed it into their own systems. So when I was talking before about why we should use sound concepts, it is exactly because of these multiplicities of actors who listen to us throughout time. We have so many companies listening to our behavior in the back end. It could be Facebook, it could be Google, it could be Amazon, it could be your government, it could be different kinds of data brokers that we're not aware of. And these companies are then trading this data without our understanding or consent. So as I said before, part of the politics around this is to create architectures where we don't really understand what's happening. And one of the things the people who developed the standard said was, oh, you know, we're not going to make it visible, but people can still change the cookie settings somewhere in the settings of their browsers. And this is part of the politics that I'm talking about: arranging the architecture in a way that makes it difficult for us to engage with it in a meaningful way. And I thought it was quite funny, because today, when I wanted to post about this talk on my Facebook, I used a different browser, and then Facebook asked me if I want to accept all cookies. So if you look on the left-hand side, we have a lot of choice here. We can either accept all, which Facebook obviously wants us to choose because it signals it in blue, and then if you want to manage the data settings, you go to the picture on the right side. And as you can see, the only way that I can engage with Facebook is by accepting cookies. There is literally no way for me to decline, to negotiate, to say, maybe I want these cookies, maybe I don't want those cookies.
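[Editor's note: as a rough sketch of the real-time bidding flow just mentioned — and only a sketch: the bidder names, profile fields, and prices below are invented, and real exchanges use protocols such as OpenRTB with many more moving parts — a single page load can trigger an auction like this, resolved in milliseconds.]

```python
# Illustrative only: a toy real-time-bidding auction. Bidders, profile fields,
# and prices are invented; real exchanges use OpenRTB-style protocols.
import random
import time

def run_auction(user_profile: dict, bidders: list[str]) -> tuple[str, float]:
    """Ask each bidder to price this impression and return the highest bid."""
    bids = {}
    for bidder in bidders:
        # Each bidder values the impression using whatever profile it has
        # already built about this user from cookies, pixels, and past listening.
        boost = 2.0 if "gin" in user_profile.get("interests", []) else 1.0
        bids[bidder] = round(random.uniform(0.10, 1.00) * boost, 3)
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

profile = {"cookie_id": "abc123", "interests": ["gin", "music"], "device": "laptop"}
start = time.perf_counter()
winner, price = run_auction(profile, ["exchange_a", "data_broker_b", "ad_network_c"])
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{winner} wins the impression at {price} (auction took {elapsed_ms:.3f} ms)")
```

[Each "bidder" prices the impression using whatever profile it has already assembled, which is exactly the asymmetry the talk goes on to describe.]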
And for me, this is part of what I'm trying to say with my book: how these companies structure a specific architecture where we basically have one way to experience them. And Facebook, of course, is not the only one; we also have Google and Amazon and all of these companies. But I think it's really important for us to understand that we are presented with one way of experiencing them, and then we don't see what's actually happening in the back end. So what you're seeing here — some of these tools do not exist anymore — is Lightbeam, which was connected to Firefox, in the middle; on the left you see Privacy Badger; and on the right is another ad blocker. And what I want to show here, I don't know if you can see it, is that there are hundreds of cookies planted on my device. And this is, I think, one of the most amazing exercises I usually do with my students — advertising students who don't even realize what is happening in the back end. And once I show them that, most of them are quite shocked. And a lot of them are like, why didn't we know this before? And how can we do something different? And I think, for me, this is again part of the bigger politics, where our computer screen or our phone screen creates this kind of divide in power relations: on the front end, we get a very pristine architecture and interface where we can engage with these platforms — if you're living in the EU, you might get all of the choice of pressing "I accept", "I agree" or "OK" — but in the back end, we have a whole other online market that is trading us in milliseconds. And so this boundary of our screen is also a division of what is human and what is non-human, because what's happening in the back end is something that we can't even comprehend. Once we load a certain webpage, or Facebook, the trading and the bidding on our profiles happens within milliseconds. So our screen cements the boundary of the asymmetric power that these big technology companies and the data brokers hold. So what happens is that these ads keep on chasing us. And I really recommend you watch South Park — they had, I think in the 19th season, episodes where they were criticizing online ads and how they chase you. And from the research that I'm doing these days, where we try to understand people's data literacies, one of the things that we keep on hearing from people is that not only do people not really understand what's happening with their data, but a lot of the time, when they express these kinds of concerns and fears about advertisements, they say they keep on seeing the same ads, and at a certain point they say, okay, after I've seen it so many times, I'm going to press that ad just so it will leave me alone. So I think what we're seeing here is this kind of ordering of the architecture, and of what we can engage with at a specific time and space, and how that basically shapes our behavior. And that can also take the shape of how different kinds of companies push different kinds of disinformation and misinformation at us.
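[Editor's note: to make concrete what tools like Lightbeam and Privacy Badger visualize, here is a small, purely illustrative sketch of the first-party versus third-party distinction discussed earlier: one page load produces responses from many domains, each of which can set its own cookie. The domains, cookie values, and helper function are hypothetical, not taken from any real site or tool.]

```python
# Illustrative only: sorting Set-Cookie responses from one page load into
# first-party and third-party buckets. All domains and cookies are invented.
from urllib.parse import urlparse

def classify_cookies(page_url: str, responses: list[dict]) -> dict:
    """Split Set-Cookie responses by whether they come from the visited site."""
    site = urlparse(page_url).hostname
    result = {"first_party": [], "third_party": []}
    for resp in responses:
        host = urlparse(resp["url"]).hostname
        first = host == site or host.endswith("." + site)
        result["first_party" if first else "third_party"].append((host, resp["set_cookie"]))
    return result

# One visit to a hypothetical news page, with embedded ad and tracker requests.
responses = [
    {"url": "https://news.example.com/article", "set_cookie": "session=xyz"},
    {"url": "https://ads.tracker-one.example/pixel.gif", "set_cookie": "uid=42"},
    {"url": "https://exchange.tracker-two.example/sync", "set_cookie": "sync_id=99"},
]
print(classify_cookies("https://news.example.com/article", responses))
```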
So I think only now are we starting to get these critical debates about how Facebook keeps on pushing different kinds of problematic material — but actually, pushing problematic material is part of Facebook's business model, because the more emotional the material on their platform, the more you will engage. So it doesn't matter to them if it's a picture of your family in an awkward position, or disinformation, or conspiracy theories, because it's all part of fueling more engagement and more comments between people. So I'm going to push more towards what's happening these days and show you that today activists are pushing back, and I think there is a kind of reckoning with this power relation and with what these companies are doing. We're seeing that the FTC wants to start investigating real-time bidding these days. Actually, I think last week or a week and a half ago, activists showed that the whole consent framework of the ad tracking industry is actually flawed. So we are starting to see a lot of changes, but the main problem is that the way we experience the internet today is that these companies want us to feel that this is the only way we can experience these platforms: that it can only be through my own profile, that many people can't, for example, share one profile, or that it can only be with cookies, or it can only be through this kind of ongoing surveillance. And so for me, one of the main things about peeling off these layers of politics, and how these companies have lobbied and tried to create these architectures, is to show that all of these things were ongoing strategies by these companies. And that also means that we can build different kinds of platforms and different kinds of things in a different way. But in order to do that — and this relates to what I'm working on now — people are actually not aware of what is happening in the back end. And as I said, this was a planned strategy, but people do not understand the political economy of the internet. So most people do not know that Google's and Facebook's main income comes from advertising. Most people don't understand how algorithms, profiling, or web cookies work. I can even go further: I'm doing focus groups this month, and when we ask people even basic things, like what is data, what is the thing that these companies might know about you, and things like that, most of these people have no idea. So I think that in order to create change, in order to understand what is happening, and even more so in order to demand a different kind of future, we actually need to go back to basics and explain to people what is happening. And I think, unfortunately, a lot of people have seen The Social Dilemma and keep on texting me and saying, oh, you should see that — even my mom tells me, oh, only now I realize what's really happening — which is good and bad, because obviously that documentary could have been much better. And I have a whole Twitter rant about that if you want to see it. But I think what it actually shows is that in the way that we communicate, whether as scholars or journalists or activists, we kind of assume that people know about these things. And a lot of the time we talk from a very top-down kind of approach.
And I feel that we actually need to go back to basics and explain to people what is happening, in order for them to demand different kinds of platforms and to understand what the consequences are. So I actually wanted to show you part of that. This is part of the research that I'm doing right now, where we ask people — on the left-hand side you see how much they agree that companies should use their personal information to personalize their experience. And you can see here, across different kinds of groups, that more or less they don't really agree, but some of them do. But when we actually ask them if they accept that companies would use their personal information to track their behavior over time, you can see here, pretty much unanimously, that most people don't agree with that. And the fact that people don't make the connection that personalization and tracking over time are the same kind of thing is actually clear evidence that people have no idea what's happening and don't make these kinds of connections. So when you don't know what's happening, you can't really demand a different kind of future. And while I believe that as academics a lot of us are doing really important work in advancing the debate, we actually need to go back to basics and make people understand. So I'm going to skip ahead and, I think, go straight to the end, and say that deviant media categories are about the struggles to determine what is human, what is normal, and what is social. It is about what makes us individuals in society. It is about the default settings of our lives. And if we want to change the default settings of our lives, or at least to have several options of default settings, then we need to peel off these strategies that were constructed over time to make these divisions of what is deviant and what is not deviant. So I think I even managed to finish before time, because I was talking really fast. So there you go. I'm going to stop sharing. Great. Thank you very much, Elinor. That was fascinating. Let's see right off whether we have any questions. I don't see a hand. I guess I'll start with a question, which is: I know in some of your work you looked at practices outside of norms. Are you seeing communities attempting to circumvent these systems in their media practices in any interesting ways? I think that a while ago I saw that several teenagers were using the same Instagram account in order to bypass different kinds of profiling. And I know that a lot of activists are also doing stuff around that. But I think, you know, there are a lot of different kinds of people who are trying to object in other ways. So we have Max Schrems, for example, who is an activist trying to change things from the legal aspect, which I think is quite important and crucial, because, you know, you need to have several kinds of battles going on at the same time. I think people are also trying to create different kinds of alternatives to Facebook and, you know, Twitter. I don't know how effective it is, but I also obviously don't think that there is only one way. If we want to change the way things are, I think we need to go in multiple directions. So I think, you know, on the one hand, having more different kinds of education and data literacy programs is one way.
Another way is to try to change the legal frameworks that we have today, which are completely not equipped to combat these kinds of companies. I don't know if you saw, a couple of days ago, that the FTC is going to bring a huge antitrust case against Google, which is great, but what is it actually going to do? Is it actually tackling the main issue, which is the business model? No, it's just saying, oh, maybe we'll, you know, dissect Google into even smaller pieces. And also, as you can see, all of these measures are coming after a really long time, by which point these companies have already managed to cement themselves as a huge and inseparable part of our lives. So for me, one of the main things is that we actually need to have more public spaces, which could potentially come from government funding, or maybe from an internet tax or different kinds of things like that, where we will have other options than the big companies. It's not a perfect solution — no solution is — but it could be a start, rather than counting on Google or Facebook to do everything that we need, basically, today. I hope that answers the question. Emily, did you? Yeah, yeah. So I have to admit, I've only watched, I think, about half of The Social Dilemma, but I'm just curious to know, if you had the platform of movie making to spread your message, how would you have changed the way the message was delivered to people about these things? So if you could recreate The Social Dilemma on your own terms, what would you have wanted to depict? I'm happy that you're asking that, Emily, because I'm just planning to actually submit to Netflix. So if Netflix is watching this, I'm open to getting all of the funding that you want to give me. But no, seriously, I thought about that. I think there are several main issues for me in that documentary. First of all, asking the tech bros to answer for the problems that the tech bros have created is, for me, a huge issue. There are a lot of smart people, both in academia and underground — activists who have been dealing with these things for years — who weren't asked. And another thing is that, although I like the US and I think a lot of amazing things are happening in the US, these companies are global. So if we keep on only focusing on US issues, it sort of recreates the problem, right? Because a lot of the standards of these companies, when we think about content moderation, and a lot of the laws that govern that, come from a US-focused and US-centered mindset. And I think it's quite important to have different kinds of perspectives on how these companies shape things. I would definitely also focus on Chinese companies and Weibo and different kinds of things like that, and also show, I think, more historical perspective. But I think that, yeah, I would definitely also not do a one-film documentary, because I think if you're trying to shove so many things into one program, that means you're losing a lot of things. So I am thinking of doing a series. And yeah, and just to consult with people who have been dealing with it in different parts of the world and to get a richer understanding of how these technologies affect different kinds of communities and different kinds of regions, and not to assume that the people who actually created the problem have more insights than the people who have been dealing with it in different kinds of ways and aspects. That would be my approach. And again, Netflix, if you want to — I'm available by email.
And yeah. If you need any petitioners for this series, let me know. I will. No, I'm serious. I was actually DMing a lot of people this week saying, I can't stand having all of these documentaries and having my mom tell me, oh, did you see The Social Dilemma? It's so important, I learned so much. And I was like, yes, mom, but... So, you know, I think this is part of something that academics need to do: we need to communicate more, because there's a lot of really important and great work happening in academia, but a lot of the time it's really difficult for us to communicate it. So I think we need better routes to communicate this stuff. You know, a lot of the things that are happening with Facebook — we've been talking about them for a decade, if not more, right? And everybody keeps on being surprised every time. So that means we basically need a better communication channel with journalism and a way to communicate with everyday people, in order for them to relate to what we're talking about, basically. Are there — yes, Vivek? Thank you, Elinor, for a really great talk that taps into all of my existing paranoias. I was involved in, like, Web 1.0, really, in the 1990s as an HTML coder, right? And so I know that all these conversations, all these concerns, have been there ever since the beginning, right? But, as you're saying about The Social Dilemma, these voices have not been listened to. And the people who are in that film knew all of this too. I mean, you know, because it was all there, right? Yeah. But my question is actually about, it goes to some of the work that you've been doing, the interview-based research that you've been doing. And you showed us one slide with people, whether they are okay with their information being tracked in these different ways. Yeah. And I guess, you know, part of what I'm curious about is whether you're also approaching those questions in terms of trade-offs and what people are willing to trade off. And I'm thinking about this in terms of the really, you know, draconian surveillance that followed 9/11, right? And for those who were in the communities being surveilled, it was a horrific moment and, you know, continues to be so, because it shifted the way that the state is allowed to surveil different communities. But for a lot of people who were not part of the surveilled communities, it was this question of trade-off. Well, if I'm safer, I don't mind being tracked and surveilled, because I know that I'm not the person being targeted, right? And so actually this is for my own benefit. And so the logic that has been sold about cookies and other kinds of tracking devices is that, you know, this is for your own good, so that we can serve to you the kinds of things that you want to see, the kinds of products you want to buy, et cetera. And it's just a matter of, you know, you just have to trade off a certain amount of privacy for this benefit. And so I'm just wondering about how that plays into, or has played into, the kinds of conversations or interviews that you've done with people. I think it's really interesting what you're saying here. And I had a focus group actually today, and a lot of people said that, you know, they were concerned about different kinds of privacy-related issues.
But then, with, you know, the pandemic happening, they didn't really feel that they had a choice. And I think that the non-choice factor here is quite huge, because a lot of the services that we have today are digital. There are actually very few services today that you can use non-digitally. So there's actually not really a choice here. You're not really being asked if you can or cannot do things. And what a lot of these companies did was basically change the nature of this kind of contract with people, right? So you have these different kinds of terms of use, or different kinds of contracts, basically, where they say what they are going to do with your data. I'm probably one of the few people who actually reads these terms of use when I can — not always, because I try to have a life. And, you know, that again creates these asymmetric power relations, because what are you actually trading? Do you actually know what you're trading? And I think a lot of the time, you know, we're being told that we should feel safe, or that this is okay, but actually it's not, because they also sell your data on, and it's really difficult to trust these companies, because as time goes on we realize that, actually, I thought that I was giving my data just to get to the Guardian, but I give it to the Guardian and I give it to a lot of other data brokers who are maybe then going to sell me a problematic life insurance policy, or different kinds of things like that, or maybe it can harm different kinds of, you know, job opportunities, or things that we actually can't really understand and predict right now. Because one of the problems with this data, as I said with processed listening — one of the main things that I wanted to emphasize with that — is that it's an ongoing process. So I actually have no idea who has my data, how long they are going to keep it, how they are going to use it, for how long, and different kinds of things like that. So I agree with you that in a fair world, or in the previous world, we knew that we were making this kind of transaction, a way of saying, okay, I'm just going to give you a bit of my data and then you're going to do this with it. But actually, one of the things that I'm trying to argue in the book is that there's no negotiation here, right? You have one way — as you saw with Facebook, I can only press "accept all" or "I accept". And why actually can't we negotiate? Why can't I negotiate individually with each of the platforms about what I want to do? Why can't I have a bar on the side where I can see which kind of data is going out? And believe me, I'm not going to get confused, because I'm sure that all of you at this moment have at least 20 tabs open with, like, a bazillion other things happening. So I think we need to think about these things differently. And I agree with you that some people think it's great — I'm going to get personalized ads, I'm going to get maybe a gin that I always wanted to buy. And occasionally I do get really good ads, but I can't even know what the trade-off is here, because it's so opaque to me. I have no idea what's happening in the back end. I have no idea who is involved, and that is what's troubling to me, and what's troubling to a lot of activists and scholars: this kind of screen that basically separates me from all of the other companies that can listen to my behavior. So while I do agree with you that sometimes the trade-off is okay,
the possibility that it could be misused or abused is very high. And so we need to think both about how we're going to regulate these companies and about how we create a fairer kind of trade-off here. Actually, maybe I do want to negotiate with you — just like I wouldn't have an open-ended contract with my landlord, who might decide tomorrow that they want to crash in my living room. So these are the kinds of things I'm thinking about. I hope that answers it. Yeah, thanks, and I agree with you. My question was more about how we change a kind of broad acceptance as part of the activist work. Yeah. How to change, or create, a broad consciousness that the trade-off is no trade-off? So again, it's about data literacies, and it's about these very basic things. Unfortunately, the way that media and computing are taught, we're not really taught about basic things like the online economy, what cookies are, and all of these kinds of things. And part of the project that I'm working on now, which you saw before — the survey that we conducted, and now we're doing focus groups — is called Me and My Big Data; it's Nuffield Foundation funded. And part of what we're trying to do is to understand what different kinds of groups actually understand, and then how we can design educational material that is actually tailored to different kinds of people. What we saw with the survey, at least, is that people's data literacies are very much influenced by education and socioeconomic conditions. So the more educated you are and the richer you are, the more you know what's happening with your data and the more you know, for example, about privacy settings and what to do there. So again, we're seeing how these platforms and this architecture actually harm the people who are the most marginalized. So the answer is data literacies, in a nutshell. We have a couple of questions and/or comments in the chat. I could read them, but I'd rather people... What is the difference between the chat and the Q&A? The chat is from the panelists and the Q&A is just from the attendees. But I wanted to — and I want to get to all of them — I wanted to first give people the option rather than reading them out loud. Abby, did you want to make your statement yourself before I read it, or what's your preference there? Sorry, I have just done an entire day full of Zooms, so I apologize for my video being off. I'm happy to read my statement. I'll also turn it into a question, actually, because I sort of wrote it, Elinor, before you said that you would love to make a documentary for Netflix yourself. So... A mini-series, a mini-series. A mini-series, not just a documentary — I think big. Yes, it's all the rage these days, and my own background is in documentary film, and I think a lot about the politics of exhibition and all of that. So what I wrote was — I'll rephrase it too — I'm pretty concerned, honestly, as someone who comes from a film distribution and exhibition background, about the streaming platforms, such as Netflix, that The Social Dilemma is on. For me, the way these platforms are designed is similar to the way that social media platforms keep viewers engaged. So for me, a film like The Social Dilemma, which criticizes the decisions that these social media platforms make to keep viewers uncritically engaged, is in fact being delivered through the same strategies that Netflix itself uses.
So whether it's a film or a docu-series — even though I have not seen The Social Dilemma, because I have major problems with the filmmaker's other work; he uses a lot of, I think, advertising and marketing strategies in his works, which are extensive; anyway, I don't need to get into that — my question is about anything that's critical of the results of these types of technological apparatuses, how they, as you describe it, filter and sort people's behaviors. I guess I'm curious for you to talk about your thoughts on whether anything critical of these things can be truly critical if it's also being used as a piece of content by these systems. Thanks for your question. I think it's a really important one, and I think it is really difficult to create content that would resonate. I can tell you from my experience: I remember one time I was teaching my students about cookies and everything, and one day I saw all of my students come in and cover the camera, you know, the laptop camera. And I thought, yes, finally, everything that I'm teaching them is coming through and now they're interested. And then I asked them, so why did you cover the camera? And all of them said, oh, we saw that episode of Black Mirror where, you know, he's being photographed. And I was like, okay, okay, maybe it's like halfway there. But I think things like Black Mirror — stories, basically, stories that we can relate to — and I'm sure, as a documentary maker, you know the value and the power of stories — I think that could be our way to communicate these kinds of things more clearly, and hopefully to make critique in a way that is engaging and meaningful for people's everyday lives. Because I think sometimes, when we talk about critique and try to communicate it, everyday people maybe wouldn't be interested, or they don't really understand how it relates to them. So I think if we can translate a lot of our ideas into these kinds of stories that people can relate to, that can help communicate the critique in a meaningful way. I hope that answers your question. Thanks, Abby. Charlton, would you like to — yeah, unmute, go ahead and ask your question. Thank you. Yeah, yeah, I'd be happy to. Hi. Hey, Elinor, I'm so glad that you made it to Boston, even if it was only for a little while. Oh, right. Thank you so much. Yeah, of course, of course. So this was fascinating to hear. I was hoping that you would close the circle on the talk by coming back to your use of sound metaphors, because I find the argument about having to stabilize what counted as cookies, to stabilize what counted as a human user — that's all very convincing to me, as is the turn to digital literacy, to, you know, what do people not know that they don't know. So in that argument, where did your turn to sound help you notice something, make a case that you couldn't have otherwise? So, thanks for the question. I think that when I started the research, the first case study was with Bell Telephone, and it's about actual noise and sound. And so when I was starting the research, looking at spam, I was trying to think, okay, spam is this kind of disturbance, so what's actually happening there? And I was trying to see who actually created the first communication model, and that threw me back to Bell.
And I think that sort of helped me to think about how the more we listen to different kinds of things, and the more we can listen to multiple spaces at the same time, the more that creates this kind of asymmetric power. And so when I'm talking about cookies, or when I'm talking about Facebook, Facebook has the most power by being able to listen to my behavior both on the platform and outside the platform. At the same time, I can't listen to my own body — whichever body I consider it to be: it could be in my computer, it could be my laptop, it could be my phone — because of, as I said before, this kind of screen which doesn't really allow me to understand who is listening to me. So to me, listening enables us to cross boundaries of time and space. And that means that I can listen to your behavior when you're on Facebook, when you're on Google, and it's an ongoing process. Which, for me, can't really happen when we're thinking about it through vision, because vision, for me, is very much constrained within time and space. So I can only be in a specific time and space, and that creates, for me, a singular layer, if you like. And what I realized with Facebook and all of these companies that are listening to us is that there are these multiplicities of layers. And as you obviously know, I also work on content moderation — we had a very nice article that came out today in Internet Policy Review, if you want to see it, from our AoIR panel in 2019. And there I examined content moderators and how they listen. And what I noticed with the content moderators — which, at the time that I was examining it, around 2014, 2015, was still kind of new — when I started to engage with that, was that it isn't a single entity: there are actually so many different kinds of entities, whether human, like the moderators, or non-human, like cookies, that are listening to our behavior. And therefore there are these multiple spaces, and I think vision doesn't really allow us to understand these kinds of multiplicities. And for me, the multiplicities are important because they show us how many organizations are involved in creating these kinds of profiles. And again, this doesn't only happen in a digital environment; it also happens, as I show with Bell Telephone, in an analog environment. So when I was comparing content moderators to telephone operators, who also listened on the line and were part of the communication channel, this is part of what I was trying to engage with. So I hope that answers it. Yeah, yeah. I like that it's sometimes about looking at actual listening practices and sometimes it's a kind of metaphor that you use to train yourself to notice things. I think both of those are really interesting. Thank you. Thank you. Yeah, I think for me, again, it's kind of an exercise. I don't mean that we need to abandon everything that is visual and optic. I just think that there are more and more scholars who are pushing towards thinking about media through different kinds of senses. There is David Parisi's amazing book about the archaeology of touch. And so I think that as scholars — and what I really like about this department specifically is, as I know, there are a lot of artists and people thinking critically — it just opens more, you know, spaces for us to think about what's actually happening.
So it doesn't mean that we need to completely abandon vision, for all the people who like vision-related concepts; it's just another way to examine different kinds of media phenomena.

All right. Tabas, would you like to?

I'm just gonna read out what I said, because I don't know if it came through. So, following on the idea of distortions: have you considered the scenario of algorithmic listening to bots? So the algorithm listening to bots who have algorithmic behavior. I wonder what happens in that feedback loop, and how the global algorithm becomes distorted by the input of algorithms. And I wonder what the consequences, the spillovers, are for human users.

So, what is the disruption when algorithms... can you repeat that?

I'm wondering, so, you know, we have algorithmic listening, right? But we also have algorithmic content production. We talked about bots, we talked about how these algorithms produce content. So I'm interested in whether you've considered that feedback loop, right? How the algorithm listens to algorithmic production, and what happens in that scenario?

I think that what's really important for me in my research is to emphasize that it's never only machines or bots involved. There are always humans in every part of the process. So, you know, even with platforms, only in recent years has Facebook actually admitted that there are content moderators; at the beginning they said, oh, we don't know what you're talking about. And then when people started to push, and more and more people started to talk about that, you realize that there are people at all of the points of automation. It's never full automation; there are always humans there. So I think that when you're talking about this kind of algorithmic content, and also algorithmic ordering, there are always people at different points of the communication channel who are going to decide what is deviant, so what is relevant for our business model and what is not relevant for our business model and hence will be filtered out. So to me, it's really important not to talk about algorithms in an abstract way, because then we're taking away the responsibility of the humans who are always involved in that process, and the politics of the different kinds of humans, whether they're the programmers or the CEOs, or whether they are the more subordinate humans in this process, which are the content moderators, or, you know, even in Google they have the rankers who decide how things are going to get ranked in the Google search results. So to me, what you're asking again involves these different kinds of decisions made by humans, and it's never going to be a perfect kind of entanglement. But I think the way that they're going to respond to each other is very much related to these kinds of decisions. I hope that answers it.

Thank you. We've got two questions from the Q&A, and I'll read them out loud just so that they get into the recording. The first is from Radu: the idea of surveillance and data capitalism that is being put forward by the tech giants nowadays drives me to always think of Gramsci and his theory of hegemony. Following Gramsci's strategies for changing systems, he advocated for people going into institutions such as schools and government offices to change them from within. Would that be possible in the tech industry? Would those change-makers be immediately perceived as deviant and thrown out?
And then he adds: sorry for the long question, but I love this talk, I'm very enthusiastic.

No, it's fantastic. Please ask, it's great. I really like the questions here; it's great that people engage with the book. So you're basically asking if it's The Matrix, and whether Neo needs to be part of the system or not. I think that, again, in order to change the way things are, we need to think about it not as the one solution but as multiple solutions, and of course different kinds of solutions are going to be more relevant in different regions. So what will be more beneficial in Europe is not going to be as beneficial in the US or Russia or Asia, China, Israel, different kinds of places. I do think that a lot of people are trying to change things both from within and from outside, and I think we need to have these kinds of forces in all of the directions. I think that at the moment, with the kinds of instruments that we have, as I said before, everybody had so many hopes for the GDPR. If you aren't following him, I really recommend you follow Max Schrems, an activist who's been trying to change a lot of things, and he has changed a lot of things. What he showed is that even though we have the GDPR, the DPAs, the data protection authorities in the different European countries, are actually given very little money to take care of all of the people's complaints and lawsuits against the different kinds of companies that are mishandling their data. So I do think that people need to go into government, and hopefully also into technology companies, but also that we need to educate people. So to me it's multi-layered: to change laws, to have data literacies, to change the way that technology companies operate, and also to change the way that we talk about these things, whether it's with more entertaining content but also with journalism. So yeah, it's a multi-year, multi-step strategy.

To some degree with that answer you may have answered the following question, which is from Hamidreza Nasiri: isn't what you said about The Social Dilemma actually one of the main issues? That there are these sources that give the illusion of informing people, but they systematically ignore the systemic problems, which makes sure that the discourse remains shaped by the powerful institutions. The same thing that Professor Bald mentioned regarding convincing people that it's for their own security, or recently convincing people that censorship by big tech is actually in the interest of the people. How can one fight that kind of informing that persuades people to act against their own interests while making them think that they're actually engaging in resistance?

Very intense questions. Again, I don't have all the solutions. I only think that the way to make people engage is actually to inform them. I have a friend who's been an activist for many years, and he told me that the thing that motivates people to go to the streets is when things hit them the most. So in Israel at the moment, people have been demonstrating for the past few months against Benjamin Netanyahu and his regime. And it's not that they haven't been demonstrating before, but I think that with COVID and the pandemic, they realized what they can lose. And when it touches your life, when you actually know what you can lose, then people are more engaged. I don't know if people saw that students here had a whole demonstration against the algorithm.
So the story was that students here were supposed to graduate from high school. They were supposed to have an exam, but because of COVID, of course, they couldn't have the exam. And then the government designed a weird kind of algorithm that made a lot of mistakes, which meant that people who came from deprived areas received lower grades than they should have. And then students came out and started to protest, with these different kinds of signs, like "fuck the algorithm", going to the streets, doing a lot of petitions, and attacking the government. And the government actually caved and changed the results. So why didn't these students protest, for example, when it was Brexit? That's a question I was asking quite a lot. And I think one of the things is that with the algorithm, you can actually see what's happening to you. It was very visible, right? You can actually see how it can harm you. With Brexit, we don't even know what the consequences are, what the agreements are, right? So I think that if we actually want to engage people, if we want to motivate people to make these different kinds of changes, we need to help them understand how things can actually harm their everyday life. That, I think, engages people into action. And we have different examples like these.

You know, it's funny, I followed that from a distance and I kept thinking it's great that they're protesting, but shouldn't they be protesting the whole system to begin with? The one that orders and sorts them in general, rather than just when it gets it wrong.

No, totally. And also not only when it comes to their grades, right? But again, if you don't really understand how that affects your life, I think it's harder to get people to make these kinds of imaginative leaps. And also, you know, we show how these kinds of algorithms impact different communities, communities of color, things like that, but not all people encounter all of these things, and not everybody hears about or really understands how that can actually influence you in the future. So, yeah. I think I saw another... Yes, Shusti had a question, I think.

Hi, Elinor. Thank you for that, that was great. I actually have a question about the assumption that most people have agency over things like public demonstration, right? So I'm talking about spaces, countries, where protesting is not an option. You cannot go out on the streets, and the digital is the only way for you to actually act in resistance. And that kind of goes back to the question in the Q&A. If the digital is the only space where you can exist, to express your freedom, express yourself, then you don't really have much of a choice. And so how do you change your narrative of data literacy away from cultures like the US, the UK, Israel maybe, where you can go out and protest, to societies where you just can't do that?

I think it's a really good question. And I think that, yeah, we have a lot of assumptions in the kinds of solutions that I was talking about as well. And this is why I said that there is no one solution; different regions will have to respond in their own way. I do think that there are a lot of demonstrations in other regions which are not West-based, and they are doing it with different kinds of means, in different kinds of ways.
I think it's going to be difficult, and I don't really know the exact way, but I think different kinds of communities have always invented creative ways to object and to protest the way things are happening, and they will have to use the different kinds of tools that are available to them. I hope that answers it. But yeah, I think we definitely can't assume that people always have agency, and obviously providing people with data literacies doesn't necessarily mean that all of them are going to go to the streets, because if you're poor and you don't have the time or money to go to the streets, then you're not going to do that. So of course I'm not saying this is the one silver bullet that's going to free everybody and we're all going to go out and burn the streets, you know. But I think it is a gradual evolution of these things, and just like feminism and anti-racist movements, it's an ongoing process, right? We need to keep on fighting, and it's not like everybody has the time or resources to do that, but I think the more knowledge we have of what we can actually do, and of what kind of power we have when we come together, the more we can change that, though of course not everybody will be able to participate in these demonstrations.

Well, great. It looks like, unless we have another question, we're wrapping up right on time. So I want to thank you again, and thank you for waiting for us, hanging in there, and showing up when you finally could. It was a really, really fascinating talk and I'm glad that we were able to have it. So I want to thank everyone else who came, and we look forward to seeing you when we go back.

Thank you for having me. And again, I just want to remind you that the book is open access, so feel free to check it out, and also the playlist. I'm on Twitter, so if you feel like continuing the discussion, feel free to either DM me or email me. It was a real pleasure doing this event, because you had amazing questions that really made me think as well, which is quite rare. So thank you very much. Thank you. Thanks, everybody. Thanks.