Hi, Massimo. Hi, everybody joining in. Wonderful to have been invited for this. I've been asked to make a brief introduction about the program itself, so without much further ado, let me do that. Welcome to the book club. The book club started in response to the need to discuss different genres of sci-fi and SF books, which intersected with the interests of the technology and book communities. Fidja Lakshmi, Harish and Gautam Chenoy are the early creators of the Book Club at Hasgeek, and the Book Club also invites discussions on books on AI and ethics, a segment that has tremendous impact on society. And that's why we're here today with author Massimo Airoldi, who wrote an absolutely brilliant book called Machine Habitus. We'll be talking about the book here. Massimo is a sociologist and assistant professor of sociology at the University of Milan. Just quickly checking with Massimo if that's still correct, because we've made a mistake with that before. No, it's good. Yeah, that's true. Thank you. Hello, everybody. Okay, so a little bit about myself before we turn to Massimo and the book. I am Michiel Baas; I usually go by Michael in international settings, so feel free to call me Michael. I'm a senior research fellow with the Max Planck Institute for Social Anthropology, which is based in Halle, a small town about one and a half hours from Berlin in Germany. I myself am Dutch, and I shuttle up and down between Amsterdam and Halle. In my current project, I'm developing an anthropological approach to AI, and in particular I'm interested in the question of how we cohabitate and co-create with AI. Instead of thinking of AI in terms of a future scenario of some dystopic variety, I'm more interested in how we live and work with AI.
In the earliest stages of my project, I mainly looked into the way artists and creative professionals engage with AI, but I'm also very interested in the way data scientists and software engineers in Bangalore work with AI. How do they relate to questions of intelligence, bias, ethics, et cetera? And it's for this reason that I came across Massimo's book. It's one of the few serious academic publications within the social sciences that engages with the kind of questions AI poses in social and cultural terms. In fact, the book is presented as a sociological take on AI. It's an incredibly well-researched and nuanced work that brings together a wealth of literature and perspectives. At the heart of the text is the idea of the machine habitus itself, something we'll be talking more about today. But first, Massimo will provide a brief overview of the book himself by means of slides, and this will last for about 20 minutes, after which I will kick off with some questions of my own. And after that, we hope you all will join the discussion with your own questions. Without much further ado, over to you, Massimo. Thank you very much, Michael, for the nice words, and thank you also to the book club and Hasgeek for having me. It's a real pleasure to be here. I'd like to start, if you don't mind, with a quick presentation to introduce the main ideas of the book. Let me share my screen, one sec. Just tell me if you can see it correctly. All right, you can see the presentation. Excellent. So, this is the cover, and as you can see, the key idea of the book is to put forward this sociology of algorithms. Now, probably the audience here knows quite well how to develop or program an algorithmic system or an AI model. Here, the take is how to understand these systems. A bit like Michael said: how to understand what their role in society is, how they participate in society.
And also, on the other end, how society contributes to making them what they are; how the culture enters the code, and the code enters the culture and participates in the culture. These are the two sides of the book, which puts forward this idea of machine habitus. The term habitus might not be well known outside the social sciences. It's a term put forward and developed by Pierre Bourdieu. It has had a long history, but it was mostly used by this very famous French sociologist. And here, the idea of machine habitus is a sort of extension of this idea of habitus, which I will explain later on, to the world of machine learning systems, of the algorithmic systems that populate social media platforms and devices. Almost any field of social life now is to some extent affected by algorithmic systems of different kinds. So the presentation will basically clarify three main points. On the one hand, how the code, these machine learning systems, these algorithmic systems, shape the culture and the society we live in: the code in the culture. On the other hand, the culture in the code: how society is encoded in the algorithmic systems that are developed and that we delegate our decisions to. And the third point, maybe the most abstract and sociological in kind, is this idea of techno-social reproduction. Meaning: we cannot understand how society works, how the social order and the different inequalities that constitute society are maintained and reproduced, if we do not also consider algorithmic systems and these technologies that are more and more present in our everyday life. A very quick premise here about algorithms. You see, the book is a sociology of algorithms, but I'm fully aware, as you probably are, that algorithm is a very generic term. An algorithm is a mathematical procedure, and it has been such for thousands of years.
I mean, it's something that can be done by hand with pen and paper, something that has been at the root of computer science as a discipline, especially with the development of digital computers. But my point in the book is that with machine learning we are witnessing an important shift. It's at the same time a qualitative shift, meaning the type of algorithms that are embedded in our everyday life, work and practices are significantly different from the rule-following models that characterize the so-called paradigm of good old-fashioned artificial intelligence. These new models learn from data, as you know. So it's not a deductive logic that guides them, based on rules decided by humans a priori; it's a learning, inductive logic that animates their behavior. And this is the qualitative shift. And then there is also the quantitative shift, meaning these systems are everywhere: from streaming platforms to social media platforms, to the filters on my phone, to the spam filters in my email, to DALL·E, through which I can experiment with making new images. They are everywhere. They filter content. They are used to decide whether or not I will get a loan to buy a house. So they are very, very important, to an extent that is unprecedented. This is also due to the progressive datafication of our everyday life, the fact that we are now in this so-called platform era, as I call it in the book. In this periodization, the period ideally starts with Google's PageRank, an algorithm that is basically an unsupervised machine learning system, autonomous in its working, and that ranks and manipulates in almost real time the pages and the knowledge available on the internet. That's really, for me, the start of this platform era where data are everywhere.
And these data are, of course, as you know, what we need to develop these models, what we need to train these models to make them intelligent. As some scholars have said, it's a sort of extraction of human cognitive abilities that are given to a machine, to a technology. So, to some extent, it's not the machine that is intelligent; it's the data that are behind it. Machine learning is what allowed me to produce these images. Here we have the philosopher Michel Foucault making an omelette. How is that possible? Of course, because, you know, in machine learning we have available pictures of Michel Foucault, which are in black and white. That's why Foucault in the images is black and white, while the omelette is yellow, because we have images of omelettes in color but not of Michel Foucault. This is, of course, one of the latest developments of machine learning, but we have many others, because the code is everywhere in the culture. The code is, for instance, in an app like Tinder, where we try to find our partners and are presented with a selection of possible partners to interact with. But here, what we tend to interpret, especially if we're not familiar with AI and machine learning, as a simple interaction between the user and the app is more than that: it's a feedback loop, to be more precise. Let me present why I think this idea of the feedback loop is very relevant when we examine the role of the code in the culture. Because here, of course, we have an interaction where a selection of content is proposed to the user, the user decides whether to swipe left or right, and this feedback is given to the system, which will adapt the recommendations to the user in a new iteration. These are important elements that we need to consider, especially if you are, like me, a sociologist interested in how society works.
We must pay attention to how these algorithmic systems, like, for instance, recommendation systems, produce these sorts of feedback loops in the culture, in society. A feedback loop means the output of the technical system is routed back: the system feeds back into itself. So, of course, I watch videos of cats on YouTube, or even more on TikTok, and the more I watch cat videos, the more I will receive recommendations of videos about that subject. This is what in cybernetics is called a positive feedback loop. On the other hand, I might decide to stop watching these videos, skipping them and watching videos of dogs instead. In that case, after a while, the system will of course adapt to my shift in behavior. What is less considered, though it is now starting to be considered in the computer science literature, as in the case of these two papers, is the fact that when we have these kinds of positive feedback loops, the risk is that we are producing filter bubbles and, as a result, echo chambers: an amplification effect by which we tend to be exposed to a selection of content that reflects our past choices. Sociologically speaking, this has important implications that go beyond the purely electoral consequences, for instance in terms of public opinion, in terms of the filter bubbles of liberals and Republicans in the US. There is more to it, because these feedback loops are also about the advertising that we see or the type of music that we listen to. These feedback loops are everywhere, and with them they also bring the important notion of bias. As you also witness with this book club, the relevance of the notion of bias is more and more key in current computer science and beyond.
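To make the dynamic concrete, here is a minimal sketch of such a positive feedback loop. This is my own illustration, not code from the book: a toy recommender that reinforces whatever category the user accepts, so that past choices get amplified over time. All names and numbers are invented.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

weights = {"cats": 1.0, "dogs": 1.0}  # start with no preference either way

def recommend(weights):
    """Sample a category with probability proportional to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for category, w in weights.items():
        r -= w
        if r <= 0:
            return category
    return category  # fallback for floating-point edge cases

# Simulate a user who, for a while, only clicks on cat videos.
for _ in range(50):
    category = recommend(weights)
    if category == "cats":       # the user accepts the recommendation
        weights["cats"] += 0.5   # positive feedback: reinforce the accepted choice

share_cats = weights["cats"] / sum(weights.values())
print(f"share of 'cats' weight after 50 rounds: {share_cats:.2f}")
```

After a few dozen iterations the "cats" weight dominates, which is exactly the amplification effect the speaker describes: the system increasingly shows what it was already shown to succeed with, narrowing exposure toward past choices.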
And here, I think it's important to understand bias in relation to this feedback loop, especially in the case of machine learning systems that are embedded in digital platforms and that react more or less in real time to the decisions and behaviors of users. It's important to really consider these systems as sorts of participants in society. That's the idea I tried to put forward here. So instead of treating them, sociologically or in the humanities, as inanimate technologies, a sort of inanimate background to our social life, we need to treat these kinds of intelligent technologies as participants that, through their classifications, their selections, their orderings of knowledge, culture and information, participate in society and interact with us in the multiple feedback loops that we encounter every day, often without even noticing them. So the sociology of algorithms must, of course, be interested in how these artificial social agents participate in society, and with what consequences. There is another point here: the culture in the code. Yes, the code is in the culture; the code affects public opinion, our musical taste probably, the products we purchase, the people we interact with on social media. But at the same time, our society, our culture, our perspectives are in the code of machine learning systems, especially because of the data. And of course you might know, especially those of you who are developers or programmers, that in the end the culture is in the code simply because we are the ones who provide the classifications these systems are trained on. The case of reCAPTCHA is quite clear, right? While proving that we're not robots, we are basically instructing and providing data for the training of, probably, a Google car or something else.
It's important, sociologically speaking, to take distance from the myth of the neutrality of technology, from considering technologies as something independent from society. We know that it's not true, especially if we have programmed and developed a machine learning system ourselves. Because if you think about it, how many choices do we make about what data to use, about what thresholds to implement, about what goals the system must pursue? These are all human decisions that are made, for instance, by one type of human behind the machines: the creators of the machines, who, because of their characteristics, including socio-demographic characteristics, eventually end up putting forward specific visions of the world. But who I'm mostly interested in are those who train these machine learning systems, those who provide the data. These are, on the one hand, the click workers that we might recruit in order to annotate images for a computer vision system. But on the other hand, they are us: whenever we skip a video, whenever we like a post on social media, whenever we navigate TikTok or other platforms. We are the machine trainers: our culture is datafied. Our ideas, our perspectives on the world, our behaviors are datafied and serve as the basis for the training of machine learning systems. This is very important to consider, because to some extent it makes it relevant, I think, to shift from the idea of garbage in, garbage out, which you know well: the idea that the type of input data and the type of decisions we make in the development of an algorithm impact the results of the model. But more than that: it's not just garbage, it's society. What's in these data? Data are never neutral.
Data always carry some visions of society, some tacit understandings of the world, especially if we're talking about human-generated data: texts, images, patterns of consumption or interaction. These are all data that carry the traces of social worlds, not just of individual characteristics. Because, of course, that's the narrative of marketing, and of computer science to some extent: they say that with recommender systems, we discover your needs, your desires. But these needs and desires, as sociologists we know, are never unique. They depend on the background of people: whether they live in India or in Italy, whether they live in the city center in a very expensive flat or in the suburbs, whether they are highly educated or not. These are the variables that count when it comes to understanding what people do and what people think. And these very same variables we find in the data sets used to train machine learning systems. This is the data set ImageNet, and you might know that some years ago it was featuring these sorts of classifications. These classifications bear the imprint of specific visions, specific understandings of the world. So my proposal is to leave aside this idea of bias, which has somewhat monopolized the debate about ethics and machine learning, and to think about data as a sort of context, to bring back the idea of context into the data we train machine learning with. Because we know that data cannot be representative of all societies. We know that a system trained on data from, for instance, the Cornell Movie-Dialogs Corpus, a famous data set used to train chatbots, cannot be neutral, simply because the movies in this corpus are American movies, Hollywood movies, not movies from Bollywood.
So probably the type of social interactions between, for instance, men and women, between old and young, between rich and poor, the way of speaking, the types of masculinity displayed, the type of ideas and imagination displayed in these movies, is simply not the same as in movies produced in another country. And nonetheless, that's what's at the root of machine learning if you train a system with this type of data set. So it is as if these systems practically learn, example by example, some cultural propensities to act in specific ways. And these are learned from the patterns extracted from the crowds of machine trainers, patterns that also reflect the position in the social space, to use a Bourdieusian term, of these individuals, and the related biases. I mean, DALL·E: "rich people having a party" versus "poor people having a party", okay? It's common sense that rich people are different from poor people, right? The rich people having a party have smart dresses and fancy clothes and nice cocktails in nice nightclubs, while the poor people having a party are probably doing a barbecue outside. And you might notice also that the share of Black individuals in the representation on the right of the screen is much higher than in the one on the left, simply because DALL·E probably knows, from the images it has been trained on, that rich people are more often depicted as white. These are biases, of course, but we can see them in a more abstract and sociological manner, as the consequences of this data context that bears the imprint of the social world it is produced from. And the same can be said of something as banal as your Discover Weekly on Spotify. The personalized suggestions that you receive on Spotify are not just personalized because you are the only one who likes, I don't know, jazz music or a specific kind of trap music or whatever.
They amplify your taste by reproposing things that are familiar to you, that you might like, and so producing what could also be seen as a filter bubble, a musical filter bubble of sorts. They do not just reproduce your taste; they reproduce and amplify the taste that is characteristic of a specific background, for instance of your people, of your social class, of your educational background, of your position in the world. It is as if these systems are being socialized. Of course, I don't think they are sentient, okay. But I think that even the most banal spam filter or recommendation system is kind of socialized, undergoes a sort of proto-socialization through which it becomes a member of society with specific characteristics, based on the type of data context it has been fed. And I'm concluding with this, explaining now why I think this idea of machine habitus is relevant. Because, you know, in machine learning, machines learn in this practical manner, right? Example after example, of course thousands or millions of examples, but example-based learning. They don't need rules predetermined a priori; machines learn what people know implicitly, right? If you teach a neural network how to recognize a cat or a dog, you don't need to explain a priori what a cat is and what a dog is. This knowledge is inductively derived from patterns in data. And this brings me to the notion of habitus by Bourdieu. What is habitus according to Bourdieu? It's a system of dispositions, a set of schemes through which we see the world and through which we act in the world. And what Bourdieu proposed was that these schemes, these ways of classifying the world, are not individual characteristics, not just a matter of the psychology or personality of an individual.
They depend on the experiences of people, and these experiences are socially conditioned: conditioned by the fact that you grow up in a big flat with a lot of books, or in a slum with no books and no running water. And that makes a difference in the ways you then perceive the world, classify the world, understand the world. So the idea here is that we can make a shift. We can say: okay, these embodied systems of dispositions guide our actions without us even being aware of it, and machine learning systems have something similar: schemes that they derive from patterns in data, which make them act and classify the world in specific ways, without the need for them to be sentient or conscious. And you can see also from this quote here that there is, even in the jargon sociologists use to define habitus, a statistical kind of terminology: the habitus is something that biases our implicit micro-anticipations of the kind of world we will encounter at each moment, expecting the future to preserve the experiential correlations encountered in the past. You see? What are these experiential correlations, if not really the root of machine learning? What is in the data we train machine learning with, if not the experiential correlations of the people who trained the system, who produced these data? And even Bourdieu himself at some point said, okay, maybe the habitus is a little bit like a computer program, and we can see it that way. So that's why I think this metaphor could be useful. I mean, this is a famous misclassification by Facebook's automated content moderator in Italy: basically, this panna cotta was wrongly classified as violent.
It's not violent, but we can understand the machine habitus behind it; we can understand the type of images these systems have been trained on. Because it's quite a splatter panna cotta, a bit like a Tarantino-style panna cotta, right? We see all this sauce, which looks a bit like blood. And that's the machine habitus: it's in these ways of seeing the world that we are now giving to machine learning systems. This means we have responsibilities. We have responsibilities because, if we are providing these systems with ways of reasoning, of producing, classifying and ordering the world, which depend on dispositions that are not neutral, that reflect inequalities, that reflect historical patterns of power distribution and cultural traditions, we must ask: what kind of social order are we making machines reproduce? And this is something I briefly tried to study, for instance, in a recent paper, where I asked myself: on YouTube, a lot of people listen to music, and on YouTube there are no music genres. So it's interesting to see whether the recommendation system of YouTube, the related-videos algorithm, is recommending music videos also based on their genre, because this would mean that it reproduces, from the data it is fed with, some cultural patterns, some boundaries. And I found out with this investigation that 90% of music videos recommended together by the related-videos algorithm on YouTube are of the same music genre. Given a video A, the recommended video B has a 90% probability of being of the same music genre. So you see, there is a reproduction of this cultural logic of music genre, which is not necessarily the only way in which one could listen to or understand music. It's reproduced by this technical system. And at the same time, I looked at how YouTube users interact with music.
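The measurement described here can be illustrated with a toy computation. The video IDs, genre labels and pairs below are entirely hypothetical (not the paper's actual data): given recommendation pairs (video A → video B) and a genre label for each video, we compute the share of pairs that link two videos of the same genre.

```python
# Toy genre labels for a handful of hypothetical music videos.
genres = {"v1": "rock", "v2": "rock", "v3": "jazz", "v4": "jazz", "v5": "rap"}

# Hypothetical (A, B) pairs, as if scraped from a "related videos" list.
pairs = [("v1", "v2"), ("v2", "v1"), ("v3", "v4"), ("v4", "v3"),
         ("v1", "v3"), ("v5", "v5"), ("v2", "v2"), ("v4", "v4"),
         ("v3", "v3"), ("v5", "v5")]

# Genre homophily: share of recommendation pairs that stay within one genre.
same_genre = sum(genres[a] == genres[b] for a, b in pairs)
homophily = same_genre / len(pairs)
print(f"same-genre share: {homophily:.0%}")
```

In this made-up example, 9 of the 10 pairs stay within a genre, mirroring the kind of 90% same-genre figure the speaker reports for YouTube's related-videos algorithm.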
And this showed, basically, that music listeners too, even in this very fluid and democratic, let's say, way of listening to music that characterizes the digital era, were much the same: the vast majority was listening to music of only one genre. So one could ask: okay, what's going on here? Is the system recommending videos of the same music genre because it relies on the patterns of the users, or is it the other way around, meaning that the system, by proposing videos of the same genre, encourages the users to listen to music in that way? My provocation is that it's almost impossible to answer this question right now, because machine learning and human beings are so intertwined in the making of society that this is almost impossible to disentangle. Thank you very much. All right, thank you, that was great. I read the book probably two or three months ago, so this was an excellent refresher, and through it you showcase the depth of the book but also its nuance, and the way you've built upon such a vast amount of literature. It must have been a colossal task to even sort through all of that, and while reading it I thought an AI could probably have helped you with that, so maybe at some point we'll have a DALL·E version, or we already do, apparently. I mean, Google Scholar is getting close to that already. But it also made me aware of how much there already is out there, and how much we also need studies that make sense of that wealth of information to push the discussion in new directions. So I've prepared some questions. Some of this you've already touched upon a little, but I don't think it's bad to repeat some of it, because for some of the audience some of the sociological terminology might be a little new. But let's start with something that I found very fascinating at the very start of your book.
You started with what is kind of a science-art project, IAQOS. I don't know if there's a different way of pronouncing it in Italian. IAQOS might be the way we pronounce it. Can you tell us a bit about what it is and how you came across it? Thank you for this question. I'm very touched by it, and I will explain why in one moment. Basically, IAQOS is a neighborhood open-source artificial intelligence system. It's a sort of art project by two wonderful human beings, Salvatore Iaconesi and Oriana Persico: artists, engineers, hackers and many other things; it's very hard to classify them. The idea is quite simple. They introduced, in a semi-peripheral district of the city of Rome, a simple algorithmic system based on neural networks, which was able to interact in natural language with people, and which was trained solely on the knowledge and terms and voices and documents produced by people from that neighborhood, the neighborhood of Torpignattara. So basically, this AI wasn't connected to Wikipedia or Google News or Twitter. It was an AI that was trained as a sort of member of this neighborhood, of this district. And by doing so, what they wanted to show was, on the one hand, the possibility that AI and machine learning can be used as tools for making connections among people. Because in the end, the system was implemented on tablets and small devices that were put in the bars and cafes and in the school of the district, and you had people interacting with these systems and asking them questions. And if, for instance, some school kids asked IAQOS, "What is an astrophysicist?", and IAQOS said, "I don't know", then basically they organized themselves and provided an answer to that question. And that's the answer that the system would now probably give.
Because that's the way in which it is trained: through these interactions, a bit like us human beings. So this system was a very interesting example for me to show how we can really see AI systems, machine learning systems, as socialized: as trained with data that cannot but reflect the cultural specificities of a specific context, a specific neighborhood. Because in this case, this neighborhood is a semi-peripheral one, quite poor, multicultural, very different from other, richer neighborhoods of the city of Rome. And the imaginary that is encapsulated in the terms that IAQOS would employ in conversation, and in the types of topics it has learned about, very much reflects the specific social background, the specific social characteristics of this neighborhood. And I was saying that I'm touched by this question, and I'm really happy to talk about it, because Salvatore Iaconesi, one of the two authors of this art project, unfortunately died yesterday. So... I encourage you to look up his work. If you search for Salvatore Iaconesi on Google or another search engine, you will see that he had been fighting cancer for a while. He was a hacker, and he opened up the results of his medical examinations in order to make the cure for his disease an open-access enterprise. He also did a TED talk on that some years ago. At some point the cancer had disappeared, but unfortunately it came back, and it killed him. So thank you for the question; it's a way for me to acknowledge the big contribution that his thinking made to the development of this book, because this art project really was able, I think, to bring out the many sociological and social aspects of AI. So look it up: IAQOS, Salvatore Iaconesi. Yeah, thanks.
Obviously it caught my attention immediately, working as I do with artists whose work I have been following for a while now in India, and it is remarkable how this particular project provides such a rich way into the material, but also how revealing it is of how this kind of interaction can be so productive for people on the street; we just don't see enough of that. It helps people understand what AI is, which is a question I find particularly interesting, not in terms of its technical answer, but mainly as one that evokes so many mixed feelings and is revealing of the way we relate to this technology. But obviously I'm incredibly sad to hear about his passing. Wonderful that we could at least talk about his work here a little bit. I'll do probably one more question and then we'll turn to the audience as well, because time is always short. So one of the questions I was thinking of, and I was hoping you could say a bit more about it, is that much has been made of the intelligence in AI; you already mentioned a few things about it. We could think of intelligence in multiple kinds of ways, a myriad of ways: as agency, as autonomy, or even as consciousness, awareness. More recently there was this incident at Google where an engineer claimed that the AI he was working with is sentient. I believe he was initially put on hold and has probably now been fired. Of course it caused an enormous storm. What do you make of such claims from a sociological point of view? How can we approach them without succumbing to the technicalities of what AI is? Thank you for the question. Let me say that, putting aside sociology for a while, I'm very much fascinated by this, as anyone with an interest in science fiction is, I think, regarding these questions about the possibilities of artificial intelligence.
But as a sociologist, to be honest, this is not a question that interests me much, and I will explain why. Because I think that the point here is not whether the AI system is conscious or sentient, also because it's not very easy to determine that in an objective, scientific manner, as was clear in the case of this Google employee. What's particularly interesting, if we want to understand the impact of these technologies on society and on social life more generally, is whether they have the capacity to communicate, to interact, to shape society and culture. So it's a secondary question, to some extent, whether they are really intelligent or not, because probably the point here is whether these systems can communicate with us. And this is a point that I'm not the only one making, because Elena Esposito, another Italian sociologist who has been working a lot in Germany, has made it in a recent book on artificial communication. The key idea is that okay, maybe these systems are intelligent, maybe they are not, but what matters here is that they are able to manipulate culture, to manipulate language and communication, and to interact with us. And I think we are now at a historical moment where we cannot deny that. Even a banal spam filter that is separating my spam from my ham in my Google email is already, to some extent, contributing to shaping my understanding of the world. And I interact with it any time I flag an email as spam or phishing, because this makes him, or her, or it, as you prefer, learn more from my perspective, from my classifications, and adapt in a new iteration. So what I mean here is that now, especially when we see something like DALL·E, or when we see language models, or incredibly advanced digital assistants, it's impossible to deny that they can communicate, they can manipulate language, they can interact with us.
And maybe that's what matters, because we have seen this also in the literature from human-computer interaction. Even if a person knows that Alexa is not conscious, is not like a human being, people are likely to treat Alexa as a sort of agent in a conversation, thanking Alexa, or basically interacting as if they were interacting with a family member. So to some extent, yes, intelligence is a very interesting question, but probably more for cognitive scientists or other branches of scientific research. In the case of sociology, and the social sciences more broadly, I think the main focus should be: we need to understand how these systems change society. How do they transform interaction, how do they intervene, in very subtle and often not very transparent ways, in how we know, how we consume, how we get informed, and how we interact with each other. I don't know if I answered the question. So let's take one from the audience. We have a question from Steppenwolf. Because the song is now immediately playing in the back of my mind; that's another AI thing, I guess. The recommendation bias has been flowing through our systems for over ten years now. What do you think of the do-not-track and privacy systems, and the impact they will have on these echo chambers? It's a very good question. Yes, more people, of course, are adopting some kind of blocker, or systems for reducing the flow of private information to companies. Still, there is also a strand of research in media research, communication science, and sociology that studies, for instance, how users, how people, relate to technologies and AI systems: whether, for instance, they are aware of the fact that there is a recommendation system there, or whether they are aware that their newsfeed on Facebook is basically the result of an algorithmic process or not.
And unfortunately, you know, those that are using blockers or similarly sophisticated systems to resist the pressure by platforms to extract data are a small minority, meaning the large majority of people actually tend to forget about these issues, as they do with the terms and conditions of the apps they install on their phone. And this is also correlated with some socio-demographic factors: people with a technical education, people with higher education, tend to be more concerned by these issues of privacy, whereas people with lower education, for instance, tend to be less aware of them. So this is another aspect of why we need to really pay attention to these issues, because there is also the risk of a gap between those that are more skilled and prepared not to be manipulated, in the case of electoral campaigns on social media for instance, and those that are more likely to be manipulated because they don't have the cultural instruments to make sense of what's going on. So yes, maybe it will make a difference, but only a little bit, because I think for the majority of the population this is a non-issue. Right. You're giving me a lot to think about again. I have a question from Fikus Chahar, who is asking for your thoughts on religious texts as algorithms and their relevance in the contemporary world. Religious texts as... oh, can you just copy it quickly, maybe send it as a direct message? How do I do that? This way. There you go. You can read through the question as well. Thanks. Religious texts as algorithms: well, if they are algorithms, they are rule-following systems for sure. Meaning, if I have interpreted the question correctly, you think that we can interpret religious texts like the Bible or the Quran as algorithms. So yeah, it's a fascinating question. I don't have a specific answer, for sure.
And if they are algorithms, it's because they indicate some set of procedures to get a specific output, which is probably immortality or eternal life, a paradise-versus-hell sort of output. But I think they're not like the machine learning systems I've been talking about. I can maybe add a little bit here. A friend of mine, Bart Barendregt of Leiden University, has recently gotten a large grant from the EU to look into this, specifically addressing questions of Islam in Malaysia, Singapore, and Indonesia. Increasingly, people use AI technology, for instance as a tool or an app, to ask questions about this. So not so much the Quran itself, but the law texts: everything that has ever been said and attributed either to the Prophet or otherwise. So people use these technologies to find an answer, also in terms of, for instance, food, like whether something is haram or not. If you have a product that you see in the supermarket and it isn't entirely clear what it was made of, or in what kind of factory, there is a way to use this as a tool to sort through that massiveness of information. And it has an impact on how people behave as religious people. Another example that comes to mind is actually one that I'm writing about right now, which is an artist from Kerala, Sleeper Paul, who did some experiments with this from the perspective of the Syrian Christian community, where he started by asking whether it was possible for AI to generate alternative Bible texts, and whether people would pick up on them. And he gave a very interesting talk about that, which you can find very easily; he's an incredibly clever guy, I met him in Kochi earlier this year. There's lots going on in this field as well, and obviously it will have an impact. I hope this answers your question a little bit.
Fascinating. Thank you, Michael. Fascinating stuff. Yeah. As we move towards the end of the session, if there are more questions from the audience, I'm sure David will let me know. But meanwhile, I thought, let me ask you: where do you think we're heading next in terms of AI research, especially from the social sciences? Where should we train our attention? What should set the agenda in the coming years, do you feel? Yes, of course I can only talk about the social science research on AI, because that's my field. And regarding that field, something that is already happening is that we are moving away from the naive idea of AI manipulating humans, which is already a good step forward. In the aftermath of the Cambridge Analytica case, for instance, there were a lot of discourses along the lines of: they're using algorithms, and these algorithms can basically change the world, change electoral results, and manipulate the minds of people; in particular, targeting the "inner demons" of people, in the words used by one of the whistleblowers in that specific case. Now we are problematizing this somewhat naive assumption of an AI system that is able to manipulate a human being, because the way in which a classifier or a recommendation system works is not like a magic pill that automatically changes the minds of the people exposed to its content. If you think about the relations we have with the recommendation systems we encounter in our everyday life, we don't listen to everything that is proposed to us in music. We don't read every recommended news item or buy every recommended product. We are often pissed off with the recommendations or the ads that we receive. We resist, to some extent, algorithmic systems, recommendation systems, all the time.
And that's without blockers and the kinds of systems we were mentioning, which means that we cannot take for granted, we cannot simply embrace, the apocalyptic idea that AI is taking over the world, is manipulating our minds, etc. Because AI is always in interaction with society, and with us; we have some agency to actually resist it. This agency, as I said, also depends on some socio-demographic variables, like the level of education, for instance. And so of course this is an interesting complication that has recently begun to be studied. In the future, I think that what we lack is more real-life data, meaning the type of data that algorithms and platforms are actually fed with, in order to really study whether we can talk of filter bubbles, whether we can really talk of polarization induced by filter bubbles, political polarization or cultural polarization, whether we can really talk of the formation of echo chambers because of algorithmic systems. These are questions that scientists have been trying to answer for a while now, but what they have been lacking so far is a lot of real-life data. Why? Because these data are not available to researchers; they are owned by platforms. So the big issue here, when we talk about the critical study of AI or algorithms, is probably not the AI or the algorithms themselves, which can be perfectly fine. As in the case of IAQOS, that was a really great system doing good in a specific neighborhood, because the data were open and accessible. The problem in the study of AI and algorithmic systems is that the data are not open; platforms are, as you know, very secretive with their data, so it's very hard to really study the impact of the code on the culture.
Fortunately, there are some initiatives, for instance Tracking Exposed, an initiative by some computer scientists and researchers in Amsterdam who have created browser extensions that can be used to track what kinds of personalized recommendations you receive on platforms such as Amazon, YouTube, Pornhub, and others, and probably very soon TikTok. This is a tool which can be very useful, of course, for users, to realize and problematize the sort of bubble you live in, in a more scientific manner, because you get an Excel or CSV file with all the recommendations you have received in the past few months, for instance. But it can be an extremely useful tool also for researchers who want to really study whether, for instance, algorithms change my taste in music, or whether algorithms change my perception of facts, or reproduce and amplify some inequalities. Yes, we have surveys, we have interviews with experts, we have analyses of data sets looking for biases, but we need something that is able to better grasp these ubiquitous interactions between humans and machines as they unfold on platforms. And I think what I would like to do, and what others also would like to do, is really try to study these things with real-life data, and probably these new tools will let us do that. Yeah. Thank you for this. It's a very rich response, reflective of your work, I guess. I just came out of a meeting myself with a different team based in Bangalore and Pune.
And we were talking about the implications of AI use, of various technologies, for medical and mental health. And you see two sides: on one side there is the opportunistic one, with all sorts of things now possible in the broader medical and mental health technologies; on the other hand, there is now such an abundance of apps that offer more or less free, ready-to-use products that almost by default start by asking rather personal questions. To give an example: this morning I downloaded an app that caught my attention, called Stoic, and I wanted to see what it was. The moment you start the app, it asks how you're feeling and how you would describe that, but of course from the very beginning there is almost no clarity on where that data is being used. I'm sure there is something that you need to sign, ticking a box somewhere. But who goes through that? Almost nobody, because it's way too long to read and nobody has time for that. Moreover, these apps are specifically designed for busy people; that's what they all claim, you know, you only have to meditate for five minutes a day. But meanwhile we give up a lot of ourselves: we feed the app with our use, and the data itself is going to be employed in myriad ways. I saw one final question, a remark from Vickers, relating to the impact of recommendation engines on cultural diversity. Any final thoughts to share on this? Really, I can only repeat what I just said: based also on my own research, I have the impression that they do not enhance cultural diversity. They tend, actually, to create cultural bubbles in general. But we need more data, and that's what I'm working on with others; we need more data to really answer this question in a scientific manner, because it also depends a lot on the platform.
It depends a lot on the recommendation method. So we also need to be really attentive to the specificities, the technical specificities, of the systems we're talking about, and avoid talking about algorithms or recommendation systems in abstract terms, as if they are all of the same kind. They're not. Thank you for the question. Well, we've come to the end of this session. First of all, thank you so much, Massimo. This was so rewarding, so interesting. It reminds me that I need to read the book again. For those of you who haven't read the book yet, get the book. There's an e-version, which I have on my Kindle; he's holding it up. It's really wonderful to have it on paper as well, and you might as well buy both versions. Thank you also to Hasgeek for organizing this, and to David in particular for putting it together. Some final things that I was asked to say before we round off: the video of the session will be edited and uploaded in a week's time. So thank you to all the participants, and interested persons are welcome to join Hasgeek's Telegram channel. The address will probably be shared through the announcements, so let me not spell it out here. But through that you can learn more about Hasgeek, as well as the book club. It's truly wonderful to be part of the Hasgeek community on Telegram, so I highly recommend it. All right. Thank you, and have a wonderful evening ahead for those in India and elsewhere; here in mainland Europe we have a bit more of the afternoon to enjoy. Thank you, Michael. Thanks to the Hasgeek community. It's been a great pleasure. Thank you. All right. Thank you. Thanks all. Thanks, audience. Thank you.