 Good evening everybody. I'm very happy to see you all here tonight, and I can welcome you in the name of the HIIG and in the name of the Brandenburg Center for Media Studies. HIIG is short for the Humboldt Institute for Internet and Society, which you all successfully found tonight. And I'm very, very happy to be opening this evening as the opening of a three-day event, a conference, as you see, on infrastructures of autonomy, about infrastructures of autonomy, which a team of the HIIG, together with people from the Brandenburg Center for Media Studies, came up with. We already had a workshop last year, and we're very happy to welcome very international guests to the conference and to this event tonight. And you're all here because we are listening to the keynote of Beate Roessler tonight, and I have the great pleasure to introduce her to you, which I immediately said I would do when Thomas asked me to. And actually, I think I never enjoyed introducing somebody as much as tonight, I have to say. To start with the formal part: Beate Roessler is a professor of Ethics at the University of Amsterdam, in the Department of Philosophy. And I will explain to you why I think so highly of her. Because I think that Beate really embodies for me what a philosopher should be today. And that is because Beate manages to write about very timely topics, in philosophy but also for our society, and really reaches people. Her famous book on autonomy, Autonomy: An Essay on the Life Well-Lived, was very widely recognized and very widely discussed, and reached a lot of people in their own thinking about their own lives and their autonomy. So she's a really well-known expert on the topic of autonomy, which is also why we really appreciate her making the effort to travel here and open this evening for us. But she has also written about a lot of other topics concerned with the digital society and with technology. She has written a lot about the theory of privacy and freedom. 
So the second reason why, for me, she is a philosopher as I think philosophers should be in our society today is because she always gets involved, and still strikes the balance of never leaving her philosophical profession behind in the way she gets involved. And the last reason is that Beate is very aware of the social dimension, and not only in theory, although in her privacy theory, for instance, the social dimension plays a very important role, but also in personal encounters. I remember, and I will never forget, the first time we actually met: we had a beetroot salad and a coffee next to the University of Amsterdam and had a really interesting conversation, and she was very welcoming to me as a PhD student at that time. My PhD supervisor was also a professor at the University of Amsterdam, and that's how we met, and I really will never forget the conversation that we had that afternoon. So for me Beate was also an encounter, a personality. And that's the last thing that I think a good philosopher is and has: a personality with a standpoint that you can agree or disagree with, but that you will never forget and cannot ignore. And that also brings me to my last point: I think learning is always also about the encounter with a person, something that we definitely learned throughout the pandemic, that learning is also about encountering and interacting with people. And that's why I'm very, very happy that we're encountering Beate Roessler tonight, and that we are all here to interact with each other and encounter each other tonight. So without further ado, the floor is yours. Thank you very much for being here and I hope you have a wonderful evening. Thank you very much. "Encounter" sounds slightly ominous, I think, but I hope you'll enjoy the lecture. 
I'm very happy to be here, actually, and I must say I was very much impressed when I read the call for papers, since it is an incredibly well-informed, thoughtful and state-of-the-art description of the topics. I was especially very happy to read that Infrastructures of Autonomy presupposes, or works with, a relational concept of autonomy, which, as you will see in a second, is, I think, just state-of-the-art autonomy theory. What I hope to give you in the coming roughly 40 minutes is different perspectives on central challenges around the question of personal autonomy. Actually, I'll probably forget doing this at some point. So what I'm going to do is: I'll first say something about the concept; then I'll say something about the body, the autonomous body; then I'll say something about the technologies, and here especially about the idea of an uncanny valley and how to manage that. And in the last step, I want to focus on the political dimension of autonomy, and I'll briefly discuss the most recent book by Habermas on the public sphere. I suppose many of you know it already. It's very short; very un-Habermasian, being so short. In the end, I'll try to draw the perspectives together, and I'll say something about the question whether autonomy can be programmed. That was one of the really intriguing questions in the call for papers as well. So the background of my paper is a question, or a problematic, which concerns our lives in the digital world as a whole. You probably all know the so-called problem of the uncanny valley, and if not, just be patient, I'll explain it. What I want to do is to use this idea of the uncanny valley as a metaphor for our living in the digital world and to explore the possible limits of, or at least the challenges for, autonomy. 
I'll explain this use of the metaphor, of course, in my paper, and I think there are very good reasons why we should use the uncanny valley as a metaphor to think not only about challenges to autonomy but about our lives in the digital world. Autonomy is a harshly criticized and often rejected concept. I think that this rests on multiple misunderstandings, and I'm convinced that the concept of autonomy remains a central critical concept, especially in the digitized society, and will therefore have to continue to play an essential role in social criticism. I'm really sorry, I have a terrible cold, but I tested negative. Thank you. So, the concept. Let's start with the concept. I was delighted to read in the call for papers, as I said, that autonomy is conceived of, thank you very much, in a relational way, and even in a very natural, relational way. So it's not "usually autonomy has been considered as blah blah and now we have to see it as blah blah". It's just: autonomy is relational, and that's it, and I think that's totally correct. Because I'm rather tired of reading criticisms of the concept of autonomy which take aim at a traditional, solipsistic, conventional, male conception of autonomy. This conception has been criticized plausibly and successfully for more than 20 years, and no one who is actually up to date nowadays still tries to be original with a critique of the solipsistic Kantian model; and I think even that critique is wrong, because Kant's concept wasn't solipsistic. But anyway: the concept of autonomy substantiates the concept of freedom. Freedom is usually taken to have these two dimensions, the positive and the negative. Positive freedom means you have to have certain options in order really to be called free; negative freedom means you should have no obstacles. And autonomy adds a certain element, and that is the relation of the person to herself. 
So the free person can just, you know, have options and no hindrances, but the autonomous person is someone who can reflect on her options and on how she wants to live. So there's a certain self-relation in it. Yes, so I have a definition; I'll say something about that in a second, because before I go into the term in a little more detail, I would like to briefly explain why the term autonomy is so important. Actually, I should put this away, because I hate it when people, or students, just read out the PowerPoint. Well, I'm not there yet. So autonomy is so important, on the one hand, because it's fundamental in all liberal democratic societies with the rule of law, fundamental at least in the sense that it's in the constitution. Without the idea of individual autonomy, we cannot explain the idea of individual rights and democratic procedures. Private autonomy, together with political or collective autonomy, substantiates and establishes the democratic constitutional state. So on this background, let me start with a rather broad and basically uncontested definition. A person is autonomous when she can in principle rationally reflect and knows in principle what she thinks, intends and wants; that's what we call competence. And when she acts for her own reasons; that's what's called authenticity. She does this, and can only do this, in relations with others, relations which should be understood as enabling conditions. The autonomous person is, in her autonomy, dependent on others. This is what relational autonomy means. So an autonomous person can deliberate and reflect with others on her aims, short-term aims and longer-term aims, and then, if there are no further obstacles, do what she has decided to do. A person is autonomous when she lives in an environment, or in a social context, which in principle encourages her autonomy and supports the idea of autonomy in general. 
The fact that autonomy is relational also means that it establishes respect for the autonomy of others, and that is respect for the ways in which these others understand their respective lives. Relational autonomy means that we always conceive of ourselves in connection with others, also with others from different cultures, like Dutch culture. I'm aware of the fact that this concept of autonomy comes with a lot of normative demands. I'm absolutely willing to explain this idea of respect and so on, but I won't do that now; I can do it in the discussion, because I want to move on on the basis of this definition. When we look at the relational concept of autonomy, the relations have to be understood and analyzed in different dimensions: ethical, social and political. Three different dimensions of the world we live in. And "ethical" doesn't mean "moral" here. I mean, all these dimensions have a moral aspect, and autonomy is of course still a moral concept, although you can be autonomously immoral if you want to. I'm not going to explain that here, but I can do it later. By "ethical" I mean this self-relation: if you think about how you want to live, then this is often called the ethical dimension of your life. So the last aspect I want to point out is that autonomy is not simply an ideal which we can or hope to follow although we know it is never to be achieved. It's rather that the concept of autonomy itself guides us in our daily actions and aspirations, at least implicitly. Of course we don't always think about how we can be autonomous, but if you do have a real problem, like should I move to Berlin or to Amsterdam, or should I do something different with my life, then you start reflecting on it, and that is what it means that the concept of autonomy is not merely an ideal. You can do that, believe me. 
So of course we can never be perfectly autonomous in our lives, and the reason for that is that we don't live in perfectly autonomous societies, which means we don't live in egalitarian societies. Only if we could live in a society where everybody had the same opportunities to live their autonomous lives could we ourselves be fully autonomous. That's a relationship which I think is actually really important, but again, this is only a general conceptual framework which I'm happy to explain later, but not now, because I'm not only talking about the concept, obviously. So the conditions under which we live are non-ideal; that's what it's called, rather euphemistically, in political philosophy. Non-ideal conditions are just the patriarchal, racist, economically discriminatory and so on and so forth conditions our societies have. That's what's called non-ideal, but that doesn't mean that non-ideal conditions make it impossible to act autonomously; that's what I tried to explain earlier. So, for instance, feminist theories have always criticized patriarchal social conditions by using the emancipatory power of the concept of autonomy. Social criticism is based on the concept of autonomy and aims at establishing the same value or worth of freedom for all people. The worth of freedom is an idea of Rawls's. Rawls thought that the abstract idea of freedom doesn't help; what we have to have in a just society is the equal worth or value of freedom for everybody. And the question, of course, is how we can use this concept to take a critical look at the dominating non-ideal conditions we're living in; and for this conference the most important question is what we can do in the critique of technologies, if we want to criticize technologies. The second point: the body. Mostly, theories of autonomy are not very explicit about the body of the autonomous person. You just saw that in my definition. 
And equally, the other way around, in the philosophical literature on illness, for instance, or on disability, the concept of autonomy doesn't play a central role, probably because of the mistaken and misleading idea that autonomy is a male, solipsistic and so on and so forth concept. I want to make three points here: one on the relation between dependency and relational autonomy; the second on the datafication of the person; and the last, a very brief one, on illness and why robots can't get ill. I find that an intriguing question, and I'll explain it; I hope you'll agree. Okay. So the autonomous person I described earlier is, in reflecting or thinking about what she wants to do and how she wants to live, already a person with weaknesses, with feelings and emotions like love or grief or fear, which form part of her reflections on what she wants and who she is. I'm losing my voice. This has nothing to do with the setting or with the paper; it's just that I really have a cold, so probably at a certain point somebody else will have to read this. So on the one hand the healthy body is, as the phenomenologists say, transparent. That's really a very useful concept, I think. I don't even perceive my body in everyday life when I'm healthy and able-bodied, unless the circumstances force it upon me: because I fall, or because I'm ill, or because I'm prevented from doing things that others can do easily. That is the transparency of the body: we just look through it. We don't even think about the body. But on the other hand, because autonomy is always relational, we are always dependent on other people. So we're not only dependent; there's a beautiful German word which, as British people explain to me, is not really translatable. In German I would say: man ist angewiesen auf andere. If somebody has a better translation than "to be dependent on", then I'm happy to hear it. So you need other people. But angewiesen sein is just a little less heavy, so to speak, than being dependent. 
But anyway, as autonomous persons we're always dependent on other people, and we are dependent on external conditions. And that depends mostly on what demands our body makes. So I would like to claim that, from the fundamental conceptual perspective of autonomy, there's no categorical difference between being disabled and being able-bodied, since we conceive of ourselves as always dependent on others; that's the perspective of the concept. And as many authors point out, for instance Hannah Witton in her YouTube videos, I don't know whether anybody knows her, she's a young feminist, I learned these things from my students, it's not as if one always knows whether to tick the box "disabled" when filling in some form. So she says the categorical distinction is very much contested, since it's not clear what "disabled" means in the first place. Hannah Witton herself is chronically ill, with many restrictions in her daily life, but she finds it unclear whether she should classify herself as disabled. And she uses this to make a point about classifications of bodies in general. On the other hand, and this is something hugely discussed in the literature on disabilities, it's of course obvious that many persons need more, and other, enabling conditions to be able to live their autonomous lives. Discriminations can be very different if you're very small or if you're chronically ill, and the different vulnerabilities pose different challenges to autonomy. These vulnerabilities can concern mental as well as bodily disabilities, by the way. So even though as autonomous people we're always already dependent on others, for some of us there are extra hindrances. The second point concerns the datafication of the person. And I suppose everybody in their working context has encountered the idea of the digital person, the digitized person. And you know, when all the data... do I have to explain this? No? Good. 
How we become our data applies first of all to all people and their bodies: whether ill or healthy, disabled or not, a person of color or white, old or young, small or big. But we can see pretty quickly that forms of discrimination exist in the digital world just as brutally as they do in the offline world, and that both mutually reinforce and aggravate each other. So, for instance, Charitsis and Lehtiniemi show that datafication means, and I quote here, additional marginalization: "Technological advancements come with ability expectations and highlight the exclusion and discrimination of disadvantaged segments of the population that result from failing to meet digital ability expectations and reach prescribed data norms." The authors introduce the notions, and I quote again, of "data ableism" and "data disableism", which encapsulate privileged ability expectations pertaining to data production and the resulting forms of exclusion that are prevalent in automated societies. Underlining the intersectional nature of data ableism, they discern its two main mechanisms, which they call data (in)visibility and data (un)desirability, with the "in" and the "un" in brackets, and they document the role of free-market ideology in producing and upholding data ableism. I hope it's rather clear what's meant: you can use the data, you collect the data in different ways, you sell the data in different ways. If you are classifying along ableist criteria, that's what they criticize and complain about, unfortunately without using the concept of autonomy. But it's obvious that we could do that. But there's another side to datafication, which is also analyzed in the disability literature, because it's precisely the technological infrastructures which can enable degrees of autonomy and empower people to participate in the digital world. 
An example would be the new developments in diabetes research, in which a chip is used to measure blood sugar practically all the time and to process the data, so that a person always knows exactly what she can eat. Especially from a feminist point of view, Laura Forlano writes that feminist theory has introduced alternative sites of knowledge production and engagement. She draws on new materialism and on feminist theories of nature, of embodiment and of technology, in order to analyze the disabled cyborg body as an epistemic site of feminist science. She actually analyzes diabetes patients in particular, because she's a diabetes patient herself, but thinks that this is a feminist perspective on knowledge production: what do we know about the body? That's a big topic at the moment, of course, feminist perspectives on the body and illness. What she wants to say is that datafication not only leads to relations between autonomous persons becoming more unequal; it can also lead to the emancipation of people, if the technologies are used in the right way. Let me briefly mention a third point. Human illness, human disability and human pain reveal very distinctly the borders between humans and robots. Robots can't get ill. If they malfunction, they are either broken, kaput, or wrongly programmed. What we can learn from this and how we should interpret this distinction is, of course, open, and there are manifold possibilities, really interesting possibilities, I think. We can not only identify anthropological insights here, what is a human being, what is illness, but we also need to think about ethical aspects. I would like to claim that this is one aspect of the uncanny fields which I referred to earlier. What precisely are the paths we should take in the uncanny field between robots, or robotic existences, and human life in the digital world? Does relational autonomy apply to our relations with robots as well? 
Can they teach us something about ourselves, about being ill, for instance? Would it help to conceive of ourselves as cyborgs, in the sense in which Donna Haraway has coined the term: beings who are always already hybrid and who should be called posthuman? I can't answer these questions here, although I'd like to point out that, as a theorist of autonomy and individual rights, I'm certainly not a defender of any post- or transhumanist visions, but the questions are perfectly legitimate, of course. What I wanted to demonstrate here is that the uncanny field I was talking about already starts with questions of the body of the autonomous person. Okay. My third point concerns the technologies. Digitization now permeates, as we know, every fiber of our everyday lives, of our relationships, of our self-understanding. We know that it not only reshapes our communication, the way we learn, the way we do business; it also raises fundamental and as yet unanswered questions about what it means to be human in this newly emerging digitized world, particularly as we embed digital surveillance technologies into our bodies, our social and political relationships and our lived environments. But the technological infrastructures of autonomy can also be quite banal. To give you just one example: I've been quite ill, as I said, with a heavy cold over the last few weeks, and when I can't concentrate at my desk, I go for a walk in the park around the corner to organize my thoughts, and I dictate the thoughts into my mobile phone. When I'm back home again, I hold my phone to my computer, and with the right sort of software the computer writes down what I've dictated. However, since my computer, somewhat offensively, doesn't understand my English, in the next step I have to get the text translated by Google Translate. I'm too lazy, of course, to do it myself. 
So without the technological infrastructures I would not have been able, at least not so easily, to write the paper, although I had autonomously decided to do so. Or at least it would have been very difficult. So these supportive technological instruments can be very helpful. They come, however, at an enormous price. The most basic and fundamental danger of the technological society is, of course, constant surveillance: endless and relentless data collection. Even Google Translate can't do without this sort of surveillance and the billions of data people provide it with. And I'm sure that all the texts I ever fed into Google Translate could come back to me in one long, pristine sentence if I asked for it. There are, of course, a lot of very interesting and equally important studies of the consequences of technological surveillance for individual autonomy, such as the problem of manipulation, the problem of commodification, discrimination by algorithms, and so on. And lots of these topics will come back tomorrow and on Friday. But I don't want to talk about these concrete aspects right now; I want to make a different point. Why did I add the uncanny valley to the title "Autonomy and its Political Infrastructures"? Not only because I find it a fascinating phenomenon, which is, by the way, very difficult to explain. Now, for those who don't know what it is: the uncanny valley is a surprising and shocking dip, you see it here, in observed human empathy towards human-like objects. By the way, I didn't check the copyrights for this; any lawyers here who could advise me on that? Well, anyway, it's from the internet; it's probably plagiarism. Okay, so it's this valley in observed human empathy towards human-like objects. The more they resemble humans, the greater the positive response. 
Up to a certain point, where the objects are so human-like, but only human-like, that we enter the uncanny valley: we feel distress, we feel emotional uneasiness, even extreme unease towards the objects. That's all, you know, behavioral, empirical research. I'm sure that in the future it will be part of our rather normal world to move in these border areas between robots that are clearly recognizable as robots and those that are so similar to us that we find them uncanny. Novels such as Machines Like Me by Ian McEwan, or Klara and the Sun by Kazuo Ishiguro, describe such a world very impressively. The most recent example I came across is a short film by Diego Marcon, The Parents' Room, and I'll show you just a brief clip, which is haunting and truly uncanny, not only because of the music and the lyrics. A father has just killed his son and daughter and wife, and is about to kill himself, and this all takes place in six minutes. I thought I could find the whole film on the internet; it was actually in Venice at the Biennale, maybe some of you went there and saw it. But when I came back, only this short fragment could be found on the internet. And I have real nerds in my group: if they can't find it, it's just not on the internet. So it's so uncanny also because it's not entirely clear whether the figures are human, papier-mache, or a mixture of both. I hope it works. Actually these are real actors, but I read that on the internet. So Isabella Achenbach writes in the catalogue on Marcon that its extreme representational realism evokes a response of repulsion and restlessness. I don't know whether some of you feel that. It's of course different when you're in the dark room where the film is played. But I find this a fascinating example of what could be meant by the uncanny valley, or uncanny fields of experience. It's as yet unclear what this will mean for the conditions of our autonomous lives. 
For our decision-making procedures, for instance, or for our self-understanding. I want to use the uncanny valley as a metaphor for our lives in the whole digital world, not only for our relationships with robots. Uncanniness can increasingly be a successful metaphor for the entire digital world in which we have to move and in which we have to live an autonomous life. There seem to be many other uncanny areas. One example is the case where we no longer know, when we phone an organisation, whether we are actually being served by a human or whether we are only talking to a computer and algorithms, because the voices are of course no longer distinguishable. But also the case of automated decision-making, and the question whether there are, or ought to be, still humans in the loop, is a question which belongs to this uncanny field. Humans in the loop: does that make sense to anybody? Okay, so it's about algorithmic decision-making and the GDPR. The General Data Protection Regulation says that with all decisions there has to be a human in the loop. I quote Article 22. It states that the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. So we have the right to have a human somewhere in the process, which will become more and more important, of course, because the whole legal system is now sort of being transformed into an algorithmic system; you just need the data. I mean, it's probably not even so problematic, but under the GDPR, European citizens still have the right to have a human in the loop. So in principle we have that right. But there are quite some questions involved. 
The question is not only, as Brennan-Marquez, Susser and Levy say in a very interesting article on humans in the loop, whether keeping humans involved will improve the results of decision-making, rendering those results more accurate or more safe. Nor is the question only whether human involvement serves non-accuracy-related values like legitimacy and dignity. The real problem, they say, is whether we should always be made aware of the fact. Does it matter, they write, if humans appear to be in the loop of decision-making, independent of whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority? Of course this is also a question that concerns the technological infrastructures of autonomy. If we can no longer be sure what the respective decision-making conditions of our actions are, or how we should behave towards others, towards things, towards people, then our self-understanding as autonomous persons is up for discussion. Even if in the end it wouldn't matter whether or not there are humans in the loop, it still sort of puts our autonomy into question. Again, I'm convinced that this is one of the uncanny questions which will become more and more urgent in the not so distant future. Now, let me come to my last point. Oh no, I have to wait a little for my next slide. So I want to talk about the political infrastructures of autonomy in the democratic public sphere, and I want to show how individual autonomy and collective autonomy are politically and technologically intertwined. Why do I have to talk about the democratic public at all in a paper on personal autonomy? Because the autonomous person is constituted, is formed, is shaped in social relations, and paradigmatically in the social public sphere. So, okay, no, I shouldn't interrupt myself. 
First of all, in any case, in the social public spheres, and in the social public spheres not only in real life, but also on the internet. Because, as we know, social communication nowadays, especially for younger people, doesn't really happen by being present somewhere; certainly not, I mean, you know, sorry, no offence intended. The boundaries between the political, bourgeois public sphere and the social media can no longer be clearly drawn, I think, if it ever was possible. So the reason why Habermas and his criticism of the public sphere should be interesting from the perspective of autonomy lies in an argument which is in itself Habermasian: the democratic public sphere is constitutive for guaranteeing individual autonomy. Only together can democracy and individual rights secure individual freedom. Without democratic decision-making, there is no democratic legislation, no democratically secured individual rights. And individual rights are the rights we have to our freedom. Habermas is famously not only the great theoretician of the public sphere, but also the one who conceptualized the idea of the necessary commonality, the mutuality, of private and public autonomy. But I don't want to discuss here Habermas's idea of the co-originality, Gleichursprünglichkeit, that's what he calls it, of individual and democratic autonomy. I find it, by the way, with critical comments of course, a fundamentally plausible normative theory. Individual freedom can only be lived under conditions of democracy; we are always at the same time authors and addressees of the law. Therefore we have to understand the public sphere as an essential part of the democratic procedures in a liberal democratic society. That's what Habermas says, and I think he's absolutely right there. It's in the public sphere that the bourgeois, the bourgeois public, discusses and debates issues which are of concern to everyone. 
We need this debate because this public sphere is essential for democratic opinion-forming and will-formation. What do we think about the Ukrainian war? How do we decide issues relating to that? This idea Habermas articulated for the first time in his classic The Structural Transformation of the Public Sphere in 1962. And back then he said these debates took place in the mass media of the democratic public sphere, where we had gatekeepers, journalists, basically citizens who suggested topics and wrote about them. These citizens had a function in the mass media and were respected for that. They still are. I mean, you know, if you think of good journalism, then of course these people are very much respected. They function as gatekeepers. They say what should be discussed and what shouldn't. So let me start with a quote from Habermas's new book, where he talks about the mass media only in passing, but where he discusses and criticizes social media. Because this is where the public sphere takes place nowadays. Nope. Oh yeah, this is by the way what I'll talk about. You'll see that anyway. So this is the quote. And it's actually so long that it has two slides, I think. Social media creates freely accessible... You can read that, by the way. You don't have to take pictures. It's freely, I don't know about freely. I got it from one of my students who found it on the internet. But it is, I shouldn't say that. I know. They know that. They agree that a professor of ethics shouldn't do these things on the internet. So they do it for me. They profit from it as well, of course. Social media creates freely accessible public spaces that invite all users to make interventions that are not checked by anyone. That's what irritates him and what scares him. And which, as it happens, have also long enticed politicians to exert direct personalized influence on the voting public. We know that. Not only Trump, of course. Everybody has a Twitter nowadays. Or had, should I say.
The plebiscitary public sphere, which has been stripped down to likes and dislikes, to clicks, rests on a technical and economic infrastructure. But in these freely accessible media spaces, all users, who are, as it were, released from the need to satisfy the entry requirements to the editorial public sphere and, from their point of view, have been freed from censorship, can in principle address an anonymous public and solicit its approval. I do have to go on. These spaces seem to acquire a peculiar anonymous intimacy. He's talking about social media. Facebook. According to previous standards, they can be understood neither as public nor as private, but rather as a sphere of communication that had previously been reserved for private correspondence but is now inflated into a new and intimate kind of public sphere. You really have to understand this, yeah? In German it's even more drastic, as the last sentence reads: als eine zur Öffentlichkeit aufgeblähte Sphäre einer bis dahin dem brieflichen Privatverkehr vorbehaltenen Kommunikation. Okay. I think, I mean, at first I didn't even know where to begin to criticize it, but now I've made up my mind. It's a very interesting quote for all sorts of reasons, and I will limit myself to a few critical comments. Firstly, what Habermas seems to forget is that social media plays a central role for people's autonomy. Because social media are, of course, in addition to the entertainment function, the most important one most people would say, also about trying out one's own identity and one's own subjectivity in discussions with others, in dialogical contexts with others. This is how it works, for instance, with all gender issues.
The countless small public spaces of social media help to explore gender issues, to bring identities into conversation with others, to even consider what is identity, what is gender, or also what is illness, what is disability, which is why social media has such an important function for individual autonomy. This is a contribution to the formation of identities through discourse, something which Habermas really finds important. He just doesn't think that social media can provide for this, and I think he's just wrong. Also, I think it's a misunderstanding that before the digitization of the public sphere, the public itself was completely inclusive and the same for everyone, formed a unified space for rational opinion formation and decision-making, understood as a communication space for a generalization of interests encompassing all citizens. Nancy Fraser, a critical theorist, rejected this Habermasian concept of the public already 40 years ago in a critique of Habermas's first theory. She refers to the many existing counterpublics even in the old world, you had feminist journals and all sorts of things, the many small subaltern publics that represent spaces where people who can't find these spaces in the traditional mass media can discuss with each other. Even if one has to assume that this does not always happen in an emancipatory style; I totally agree there with Habermas, of course. So in understanding the public sphere, Fraser moves away from the pretension of equality and unity in Habermas to a plurality of contesting publics. And Fraser also suggests, talking about the old public sphere, going beyond the clear separation between civil society and the state, since both spheres are interpenetrating each other and are both subject to democratic norms. Maybe I'm going a bit too quickly. I just assume that something like the Habermasian idea of the public sphere is in the back of your heads.
Habermas would be happy, of course, as he should be. I mean, you know, it's a good theory, nothing wrong with that. It just doesn't fit the social media. Della Porta writes with regard to Nancy Fraser that she has in fact pointed to the democratic relevance not of one liberal or bourgeois public sphere, but rather of the proliferation of subaltern counterpublics, defined as parallel discursive arenas, I'm quoting della Porta, where members of subordinated social groups invent and circulate counterdiscourses to formulate oppositional interpretations of their identities, interests and needs. Another important point seems to me to be that Habermas's reduction of the debate on social media to likes and dislikes completely misses the mark, I think. Of course we do have this culture, yeah? I mean, you have these billions of Instagram accounts where people don't discuss with each other, they just present themselves and then somebody else should like it, yeah? I wouldn't even be so dismissive about that, because there is some sort of identity formation even in that. Even something, you know, a sort of identity formation we might want to criticize, but it's there. So, okay, I don't want to belittle this Instagram culture, but it doesn't make sense to reduce all of the dialogues on social media to likes and dislikes, as Habermas does, and I'm really not being unfair in the presentation here. It's a short book. He doesn't have the younger people, I think. I mean, I learned so much about the internet and about all these structures from the younger people in my group or among my acquaintances. And I think he just had the wrong advisors, oh, anyway, that's just my opinion. So it's, as Nancy Fraser said, not easy any longer to clearly mark where the public begins and the private ends.
This is another point, of course, which was different in older times. Nowadays, things on Instagram, for instance, or Facebook, used to be private and are now public, and in Habermas's theory this is just not taken care of. Although it seems to me so important to understand what's going on there if we want to understand individual and political autonomy. Okay, so let's go one step further and have a look at the capitalist form of the companies which organize the debates on social media. It's the last comment I want to make here on the theory of the public sphere, and it's about surveillance capitalism. Let me quote Habermas again, I hope. Yeah: although for civil society actors face-to-face encounters in everyday life and in public events represent the two local regions of the public sphere in which their own initiatives originate, the public communication steered by mass media is the only domain in which the noise of voices can condense into relevant and effective public opinions. Yeah, you know, it's Habermas who says this. I mean, if it wasn't Habermas, I would not even discuss it here. But Habermas is still really, really influential, and for very good reasons; he's certainly the most important philosopher we have in, I don't know, the world, I don't know. But this has to be criticized. On the basis of his own theory, this has to be criticized. Okay, one should be wary of the idealization of traditional mass media, I think, as he is here. Just as one should, of course, be wary of the idealization of the colorful internet; I don't want to do that either. Facebook is always private as a company. On the other hand, it creates public spaces that can be decisive for users and their private and public autonomy. The democratic control of the democratic public spheres on the internet is, of course, difficult, mainly because social media belong to capitalist companies.
It is these capitalist companies, making their money on the basis of the data of the users, that have led Shoshana Zuboff to call this form of capitalism surveillance capitalism, which was very well reviewed and also criticized in German media. On the other hand, we know that all platforms, even Facebook, moderate content. They don't publish everything. So in a way, there still is some gatekeeper function. The myth of the neutral platform has long been exposed, for instance by Tarleton Gillespie, for better or worse. And that's, of course, what happened to Trump on the old Twitter: his content was moderated. So the numerous efforts to address the role of content moderation, as well as the European initiatives to regulate the digital market, point in the direction that the free space of the internet can be reorganized in a democratic way and made accessible to everyone in the same way. I don't want to be, or to sound, naive here. It's, of course, very difficult, but it's something we should think about from a perspective of autonomy. The fact that this is also a capitalist power issue makes things more difficult. With the chaos surrounding Twitter and Elon Musk, we can see that the problem is not that Musk or any other billionaire is the owner, but that social media can be owned at all. This is at least how Tom Nicholas puts it; he has a very interesting YouTube channel, a critical theorist, an American, very young. He says the problem is that social media can be owned at all. That it is owned at all, and that it's not accountable to its users, is the problem. The democratic alternatives, like Mastodon, don't really have any wide influence yet, to put it mildly. Habermas writes that just as printing made everyone a potential reader, I quote, today digitalization is making everyone into a potential author. But how long did it take until everyone was able to read?
By looking at the users, at the generations of roughly everyone up to 40 years old, in this paternalistic way, it seems to me that Habermas robs himself of the opportunity to see the innovative aspects of the various publics, to analyze their critical potential, and to think and develop them further. And this does not help his, or our, especially our, theory of personal and political autonomy. I think it's very important to criticize the social media from the perspective of infrastructures of autonomy. But we have to be realistic there. Okay, the last point now. How can we bind these different perspectives together? Well, they are all bound together, of course, by the idea of autonomy. And we have seen different perspectives on the concept, its possible limits, and the conditions which put it at risk, but which could also be the decisive enabling conditions for and within the democratic public sphere. What is so important to see is that we can only regulate technological developments with a democratic political sphere. The uncanny fields that we will encounter to an increasing extent cannot simply be managed or contained by technocrats. That's what's happening at the moment, of course, mostly. I mean, not in the EU, but anyway. Rather, to navigate autonomously in these uncanny fields, controlled, on the other hand, by a civil democratic public, exactly in Habermas's original normative sense, that would be a development supporting autonomy. This is all we can say about this at the moment, of course. I mean, it's a developing field. Against theories which hold that under non-ideal conditions agents only have rational agency, not autonomy, or that autonomy is only ever the autonomy of white, male, able people, I think we should just forget that and go ahead with the concept of autonomy. But I want to end on a different note, with a puzzle, or maybe a riddle, or maybe simply an open question, which leads me back to the uncanny valley.
The question is also mentioned, as I said, in the call for papers: can autonomy be programmed? It might even sound like a contradictory question, one which is not even understandable. But it should be taken seriously, and I think it should be answered with a clear no. It cannot be programmed. But the reasons for this answer point us to the direction in which the person, the human being, who has to form the basis of autonomy, should be understood. What do we think about the human being? Autonomy is not programmable, not simply because free will is not programmable, but because of the limits of machine learning, for the time being. I say for the time being on purpose, of course, although I'm also inclined to make a transcendental argument here: the condition of the possibility of autonomy at all is that it's not programmable. You don't have to go with me there. But that would be in a way an easy way out, because it would be a conceptual way out. I don't think it's the freedom of the will in general that should be used as an argument here, but rather the idea of action as it has been famously developed by Hannah Arendt. For Arendt, action is inherently connected with a very specific form of freedom, which she also describes in terms of unpredictability and irreversibility. Only if we act out of nothing, even without knowing beforehand how we are going to act, is our action really free. This seems to be a dilemma, because on the other hand we also have to, and want to, act predictably in intersubjective relations. You do that at the moment. Nobody's leaving the room. I'm not leaving the room. We act predictably. And how we communicate must be predictable in fundamental respects, since otherwise we as agents and subjects would neither understand nor be able to interpret other persons' behavior. And yet as individuals we want to remain free in our actions. We also have the self-understanding of being free in this sense. 'You're so predictable' is not a compliment but a reproach.
Only both sides together, the unpredictability as well as the safe predictability, make up what we call human, and what we call human action and human autonomy. I want to argue that it's the unpredictability which seems to defeat the possibility of programming robots. And this dilemma of wanting to program robots as humans, and the frustrating inability to do so, is also, and much better, described by Ian McEwan, the author of Machines Like Me. I mentioned that earlier. This is the last thing I want to do: I want to end with two quotes from his book, Machines Like Me. Just to provide you with some small background on the plot, it's really a good book. Alan Turing and his team have built a completely human-like robot which is for the first time being sold. The novel is set in the 1980s, in a different world, one in which Turing hadn't killed himself or wasn't murdered. Turing is still alive. The protagonist of the story, Charlie Friend, buys an Adam. There are 12 Adams and 12 Eves, and he wanted to have an Eve but he was too late. And he has all sorts of adventures with this robot Adam, and in the end he has to kill Adam. But on the way, Turing explains to Charlie Friend, to this young man, the limits of machine learning and the limits of programming a truly human mind. Machine learning can only take you so far. You'll need to give this mind some rules to live by. How about a prohibition against lying? Social life teems with harmless or even helpful untruths. How do we separate them out? Who's going to write the algorithm for the little white lie that spares the blushes of a friend? We don't yet know how to teach machines to lie. And what about revenge? Permissible sometimes, according to you, to Charlie, if you love the person who is exacting it; never, according to your Adam, to the robot. And maybe you remember, you know, I mean it's me talking, not Ian McEwan.
Maybe you remember that at the outset of my talk I introduced the concept of autonomy as presupposing self-knowledge. Only if I know myself, and can think about what I want in life, can I be autonomous and know what reasons I have to act. This is the idea, and we all know, of course, that in our daily practices there's lots of self-deception. It's imperfect, but it's there as an idea. Turing, the one in Ian McEwan's novel of course, knows this too. I think he says that the Adams and Eves were ill-equipped to understand human decision-making, the way our principles are warped in the force field of our emotions, our peculiar biases, our self-delusion and all the other well-charted defects of our cognition. Soon these Adams and Eves were in despair. They couldn't understand us, because we couldn't understand ourselves. Their learning programs couldn't accommodate us. If we didn't know our own minds, how could we design theirs and expect them to be happy alongside us? But we usually deal quite well with the limits of our self-knowledge. It's just that it's a huge threshold for programming robots, at least this is what McEwan argues. And it doesn't help either to deal with and navigate uncanny fields. Thank you very much for your patience.