Good evening and welcome everybody to our summer edition of the lecture series Making Sense of the Digital Society. I say hello to you on behalf of the Federal Agency for Civic Education and the Alexander von Humboldt Institute for Internet and Society. Thank you all for coming to the Hebbel Theatre tonight. The lecture series seeks to address big questions on the process of digitalization of society: questions regarding the shifting distribution of power, the change of democracy, the reorganization of time, new logics of equality, urban infrastructures and digital platforms, among other issues. The general idea is to invite leading intellectuals with a European perspective who are able to draw a broader theoretical picture of these issues and present them to us in an accessible way. We are more than happy to welcome today as our distinguished speaker, Louise Amoore. She will talk about the increasingly deep integration of algorithms in our everyday life. How does the use of algorithms change the way we make sense of our present societies? What happens to those parts of the social world that constitute our life together but cannot be quantified and automated? And finally, what means of resistance do we have at our disposal against unwanted forms of automation? As usual, Tobi Miller will introduce our guest in more detail. He will have a conversation with Louise after her talk and facilitate a Q&A session with you as the third part of this evening. Thank you. I hand over to Tobi.

Thanks a lot, Shona Taufman. Thanks for having me once again in this series. Thanks for this lovely turnout on a hot summer's day. It's also the closing night of the theatre season, actually, here at the Hebbel Theater in Berlin. And thank you to the interpreters. You don't see them, they're backstage, but they're doing a hell of a lot of work here. It's very important to have them here and to give you the German translation. This goes also for the Q&A we're having after the talk, after the conversation here on stage: if you don't feel comfortable with an English question, you can ask in German, of course. So we have talked quite a bit about agency in this series, political agency, to deal with platforms, new media, and about the European role in all of this. We have heard about AI being very dissimilar to human system references with Dirk Baecker, for instance. Today I think we will hear about similar subjectivities of algorithms and humans, similar decenterings of their sovereignty, that is. Or to put it more simply, our guest does not only ask what algorithms do and how we can control them or how they can control themselves, but asks how they shape our concept of what it is to be human, among other things. Tonight we will hear about the role of doubt in algorithm design, broadly speaking. In this sense, I expect, probably, this night to be somewhat of a turn in this series, since this is not about transparency, not about opening up the algorithm. Which kind of algorithm I speak of, the speaker would probably ask right away. Tonight is about opacity, uncertainty, unintelligibility, not as downsides but as desired categories of thinking and computing. Opacity, doubt in the dark, despite the bright stage lights that make me sweat already even at this early point. So you've heard already a little bit about the structure. There's also a hashtag. You see it up here on stage, Digital Society. We're also being filmed for LXTV. There's probably going to be a broadcast of this session but not of the conversation tonight.
So if you want to ask some questions via Twitter, do it now while I am speaking or during the talk itself. We can check Twitter once we get into the conversation. And don't miss the drinks and the snacks after our session at about nine-ish, I would say, upstairs in the café. Our speaker tonight is a professor of geography at renowned Durham University in the northeast of England. Her research focuses on global geopolitics and border control, especially when related to the role of data in risk management. Over the last couple of years she has worked on a major research project called the Ethics of Algorithms. Much of this research, I again can only assume with uncertainty, will be part of her forthcoming book called Cloud Ethics at Duke University Press, whose introduction I was fortunate enough to be given in advance. Long before we all have become familiar with what she calls Algorithm Talk, which really has unfamiliarized some of us with the properties of specific algorithms. Long before that she had worked on Data Wars: New Spaces of Governing in the European War on Terror, a four-year research project in collaboration with the University of Amsterdam, showing how everyday traces of mobile devices and money transactions have, I quote, become redeployed for preemptive security. Also published at Duke University Press, like her forthcoming book, is The Politics of Possibility: Risk and Security Beyond Probability from 2013, a book that got a lot of traction. So notions of contingency, uncertainty stay with us in a world where the future is more and more reduced to algorithmic wisdom, which tends to be a single output. But now it is time for our main input. Please welcome from Northern England, with her talk titled Our Lives with Algorithms, Louise Amoore.

Good evening, everyone. And thank you so much, Jeanette and Tobi, for that wonderful introduction. I would like to also thank Christian and his colleagues at the Institute for inviting me. It's a real honor to be here. It's a kind of lifelong ambition, actually, to speak at the Humboldt. So I'm just delighted to be here. And in particular, to be invited to speak about your theme, making sense of the digital. Because what I would like to do, if I may, is to just shift that preposition in making sense of the digital, to think about what it means to make sense in a digital world. How are digital processes changing sense making so that we make sense of ourselves and our relations with others in new ways with machine learning algorithms? So in my lecture this evening, I would like to explore with you some deep neural network algorithms, don't be afraid. I will take it step by step. And I want to suggest that these algorithms are changing the nature of how we make sense of ourselves. From decisions about a person's creditworthiness in the world of finance, or their degree of riskiness in the criminal justice system, to the life and death decisions about what might be the optimal treatment pathways in cancer treatment, or who should be permitted to cross a border. I want to suggest that increasingly our lives and our life chances are becoming ever more entangled with the adjudications of algorithms. Now, of course, you might say, well, these are very different aspects of our lives, policing, borders and immigration, the health system. And you might wish to say that machine learning is acting ethically in some aspects of our lives and not in others. That might be the direction we might want to go in as a society.
So when an oncologist who specialised in a rare form of head and neck cancer told me about the deep neural networks that he felt he was collaborating with, he said they're making possible vast improvements in detection and treatment. And we might say, well, look, here is the good. Here is the ethical use of machine learning for a responsible society. But what I want to get us to begin to think about this evening is how one might begin to draw that line between the good and the bad, or what we think of as the unethical and the ethical, in relation to machine learning algorithms. So a team of computer scientists that I followed throughout 2017 had been working precisely on new methods for tumour recognition and for the targeting of particular treatments for specific tumours. We might again say here is the good use of machine learning. But they had developed their expertise as a team also in the detection of what they called problem gambling. You can see a short extract from my interview with them there. So the online gambling company Betfair had asked them to use machine learning to detect the patterns of online gambling and detect what the anomalies might be. They had also worked for two years on object recognition from the video stream data of drone footage for a major military company. In each case, as they described it to me, they said the fundamental thing is we know what good looks like. We know what good looks like. And they said because they'd clustered the data in a way that would show them what normal or good looked like, that then they could detect anomalies. So for them, in a sense, the problem space was the same across all of those different domains of society. They were telling me we know what addicted play online or diseased tissue in the MRI scans of the human body or a civilian vehicle through the video lens of the drone looks like. We know what good looks like. So I want to propose to you this evening that this designation of the good and the bad, which so many societies are feeling they have to respond to, how do we embrace forms of machine learning for the good of society? This designation of the good, the bad, the ethical, the unethical, or even human versus machine decisions is not at all a straightforward matter when it comes to our lives with algorithms. So notwithstanding the widespread public claims that the black box of the algorithm should be opened up, that we should make sense of it, that algorithms must be made accountable for their actions, I want to say instead that the prime question should not be how algorithms should be arranged for the good of society, because their arrangements are changing the paradigm of what good means in society. We know what good looks like. So rather than beginning with that question of how to make them good or normal, I want instead to pose a different question. How are algorithmic arrangements generating ideas of the good, the normal, the transgressive and the risky? And as I will explain, machine learning algorithms learn about and make the world through their exposure to data and the adjustments, the tiny adjustments and modifications of weights and parameters to represent that data. So we might think of these processes as sense-making processes, as ways in which sense in our world is being remade. And I want to discuss with you this evening two key aspects as I see them of our lives with algorithms.
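To make that "we know what good looks like" a little more concrete, here is a minimal illustrative sketch of the clustering logic the team described; the data, the features and the threshold are invented for the example and are not the team's system.

```python
# Illustrative sketch only -- not the team's actual code. It mimics the logic
# they described: cluster past data to establish what "good" (normal) looks
# like, then flag new cases that sit far from that learned normal as anomalies.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: rows are past cases, columns are features
# (e.g. session length and stake size for gambling, texture statistics for scans).
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 4))

# A crude stand-in for clustering: one centroid for the "normal" pattern.
# (A real team would use something richer; one mean suffices to show the idea.)
centroid = normal_data.mean(axis=0)
threshold = np.percentile(np.linalg.norm(normal_data - centroid, axis=1), 99)

def is_anomaly(case: np.ndarray) -> bool:
    """A case is 'not good' if it lies farther from normal than 99% of past data."""
    return np.linalg.norm(case - centroid) > threshold

print(is_anomaly(rng.normal(0.0, 1.0, size=4)))    # usually False: looks "good"
print(is_anomaly(np.array([6.0, 6.0, 6.0, 6.0])))  # True: far from the learned normal
```

The point of the sketch is only that "normal" is whatever the past data happened to contain; the anomaly threshold inherits that history.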
And for each I want to stay quite close to the processes that I think are being engaged in terms of the development of these systems. And the first will be targeting. So how new notions of the target are generated in relation to algorithms. And the second is deciding: what do we mean by an algorithmic decision? What happens to the decision in the context of machine learning? So let me move to an example. Let's make this a little more concrete. Moving from we know what good looks like to we train our algorithm to understand what a protest is. So it is 2016 and I'm watching a tech start-up pitch what it calls protest detection software to an audience of government and corporate clients. The presenter explains to the audience we train our algorithm to understand what a protest is and is not. So it's a kind of system of training for recognition, to recognize the gathering of people as a protest. As he's explaining how the system works, the cities that they have used to train the algorithm scroll across the bottom of the screen: Islamabad, Paris, Baltimore, Istanbul. The data from Facebook feeds, Twitter and Instagram are combined with other data sources and government databases as inputs to a series of deep learning algorithms. The presenter explains to the audience that the system is getting better. So he says it's adapting day by day as it's exposed to new data and is modified. So he tells the audience we give you the code so you can edit it. So in the context of, for example, borders and immigration authorities, this is saying we have a pre-populated model but you can modify and adjust that model for the purposes of your own forms of targeting. So what does the presence of these kinds of algorithms mean for society or for life in the city, for gathering and assembling in public spaces? For the people of Baltimore in 2015, so one year before I observed this protest detection software, this was an African-American community protesting the police killing of Freddie Gray, a 25 year old African-American man. And in this context it meant that the Baltimore Police Department and the Homeland Security Department saw what they called the targets of disruptive civil unrest. So the targets for those algorithms were disruptive civil unrest. Terabytes of images, social media text, video, biometric and geospatial data became inputs to their protest detection system. People were arrested and detained. A group of teenagers in high school were prevented from boarding a bus in Baltimore to join the protest because the detection software detected increased chatter from high school students and yielded an output in the algorithm that a high risk would be posed to the crowd. So an intervention was made on the basis of an input of data from those students on their Twitter and Instagram feeds to say that they would pose a high risk to the crowd. Now we might say that to the algorithm it scarcely matters whether the attributes that are being inferred are those of consumers or voters or financial transactions. But here they were the inferred incipient attributes of people gathered together assembling to protest. These machine learning algorithms, having been trained on data from the many contingent past gatherings of crowds in other places and other far off cities, were being refined and optimised again in a new city. And this is where I think my point of departure is in terms of thinking about ethics.
That it entangles the exposure of us and our data into an algorithm that will have an onward life when it's detecting the clusters and attributes of a future group in another city. And for the protest detection algorithms it meant that the residue of past moments in an Istanbul park or a Syrian city became lodged within the very algorithms that would continue to identify people and entities in other crowds, even and especially those who had not been encountered before. And this meant that that algorithm, I guess if we really distill it, had been trained on names like Michael Brown, Ferguson, Freddie Gray, Baltimore. So the question from the perspective of the algorithm was do the patterns in this data share attributes with the clusters we have previously detected? And that's the kind of ethical, political, social relationship I'd like us increasingly to try to understand: how the exposure to us and all of our daily lives is what calibrates what looks like a normal gathering or an anomalous gathering in a city street. And for me this also identifies the key harm that we might be thinking about in this space, that the capacity to make a political claim on the future, even perhaps in this case to board a bus to make that political claim, is undermined by a set of methods that have modelled the attributes in advance. So how does one make a new political claim in a space with others when the clusters and the attributes of those gatherings are already known by the algorithms? So the target output of the algorithm, the output of the machine learning, could be almost anything. Let's think about the public debates that we're all familiar with. When Cambridge Analytica deployed their machine learning to target voters in the EU referendum or in the US presidential election, their target was a propensity in that cluster for those people to be both undecided and persuadable. So for me this is absolutely critical in terms of our response to questions like Cambridge Analytica. How does that cluster of attributes become defined as a group of people who could then be targeted with particular kinds of media, far right media and so on, based on their propensities to be undecided and persuadable? It turned out, as I discovered in my research, that the models that were seen to be most useful in defining clusters of people who were persuadable and undecided came from the fashion industry. This is the extent to which these algorithms cross worlds. The most developed models were in the fashion industry, where the question was how do you target a particular individual, thinking about the trends and thinking about their propensity to not quite yet know but to be persuaded? Now I think that means that what is at stake in the public debates is not only, or perhaps not primarily, the predictive power of these algorithms to undermine democracy or to determine the outcome of the EU referendum in the UK, or to undermine a judicial process. Of greater significance than these harms, to me, is that machine learning algorithms are generating the bounded conditions of what a protest, a democratic election or a border crossing could actually be in the world. So generating the actual frameworks within which we imagine those problems. So let's make this question of how a target is generated a little more concrete. And I want to describe to you a scene from my research where I was spending time observing in a laboratory of computer scientists who were working on algorithms for border and immigration controls.
And one of the developers described to me how his model was trained on past border and immigration data. He talked about playing with his deep neural network algorithm, taking his experimental model to the uniformed border operations team in the building next door, to ask of the output that he was getting from the testing of his model, is this useful to you? Now, why does this matter to me? It matters because you've got this collaboration between government authorities and the algorithm designers. But you've also got this malleable sense of what the output might be and whether it might be useful. So we might say here, I've described this as a space of play. So what these algorithm developers were doing was trying to get the outputs of their models to converge on the target output. And they were liaising with the border and immigration authorities to try to close the gap between their model and the target output. They would shift the weights in these hidden layers to move this space of play. So in a sense, I think what this computer scientist was asking the border operations team, when he was saying, is it useful, was whether his model fitted their view of the world. So this is what I'm getting at when I'm saying that this means that we're changing what we mean by immigration or by borders or by the judicial system or a policing decision. That the algorithm is a participant, a participant, I think, in making that space of the problem. And the output, let's say it's 0.62, or a 62% probability, how meaningful is that? So when a police authority or an immigration authority decides that over a certain threshold, some people will be stopped and asked additional questions or somebody else might be detained. Shifting these weights in the hidden layers will change the output. So you might say, well, why does that matter? The output, which we might think of as what's often described as an algorithmic decision, is actually a very contingent and fragile thing. So much action happens on the basis of it, but it's very fragile and contingent. It makes the world, but it does so in a way that is exposed to modification. So in one sense, this example of developing a deep neural network algorithm actually embodies multiple possibilities, layers of probability weightings that are beyond the threshold of understanding, even by the developer. But the multiplicity that's contained in all of that modification is then reduced to one, to a single numeric output between zero and one. And when we hear optimization or an optimized target, for me, that is about the reduction of a multiplicity to a single output that can be actioned. Now, I think that matters greatly because so often in our societies, algorithms are being discussed publicly as a series of programmable steps or as a recipe of instructions, in which if we got an unwanted output, so if the output of the algorithm was seen to be a kind of racialized targeting, for example, or that it had gendered prejudice inside it, there's a sense then, if we understand it as a series of rules, that one could somehow fix that series of rules to adjust the output. And I think that's enormously problematic because actually, these algorithms are generating rules from contingencies. Take the use of facial recognition systems for identifying targets in railway stations or at the border.
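As an aside, here is a toy sketch of that "space of play": a tiny network whose single output sits between zero and one, where one small weight adjustment moves the same person across an assumed actionable threshold. The numbers and the 0.62 cut-off are illustrative, not taken from any real system.

```python
# A minimal, hypothetical sketch of the "space of play" described above:
# a one-hidden-layer network whose output is a score between 0 and 1.
# Nudging a single hidden-layer weight moves the score across the threshold
# at which, say, a traveller would be stopped for additional questions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.3, 0.8, 0.5])          # made-up input features for one person
W1 = np.array([[0.4, -0.2, 0.7],
               [0.1,  0.9, -0.3]])     # hidden-layer weights: the "space of play"
w2 = np.array([1.2, -0.8])             # output-layer weights

def risk_score(W1):
    hidden = sigmoid(W1 @ x)
    return float(sigmoid(w2 @ hidden))

THRESHOLD = 0.62                        # the kind of actionable cut-off mentioned above

score = risk_score(W1)
print(score, score > THRESHOLD)         # roughly 0.54: below threshold, not stopped

W1_adjusted = W1.copy()
W1_adjusted[0, 1] += 2.0                # one small adjustment made while "playing"
score = risk_score(W1_adjusted)
print(score, score > THRESHOLD)         # roughly 0.63: the same person now crosses it
```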
If we're to think about the process of deleting that data and say, well, within 24 hours, all of those faces that the algorithm has been exposed to will be deleted, still the algorithm itself is being modified through the exposure that it's had to those faces. So even if we're deleted, the traces of our lives remain within the algorithm. So in every iteration, every contact with the world, every deployment at a train station, at the border, in a city square or in our credit rating, for example, these algorithms are generating the conditions of what or who could be recognized and targeted. So once more, I think that this means we have to change the questions that we are asking. Whatever could it mean to govern algorithms or to regulate algorithms when they seem to be governing us in new ways, or suggesting and proposing new ways in which we imagine our problems and our political decisions? Now, of course, much of the debate here has been about placing a limit on the actions of the algorithm. So in May 2018, quite famously now, 3,000 Google workers signed an open letter to their CEO denouncing what they called the weaponization of object recognition. And they sought to define a limit on the use of their algorithms at the threshold of war. So you can see there, writing to their CEO: Dear Sundar, we believe that Google should not be in the business of war. But of course, my point is that I want to say the methods that I've been discussing here will precisely pursue those targets and decisions of war by other means. And so these workers at Google sought to draw that line at the limit of militarization. But nonetheless, some of their most recent research on object detection is absolutely intrinsic to the programs that we think of now as being most militarized. So I think that means that we could not simply exempt war or draw a line separating good use of deep neural networks, like tumor recognition, from bad, from autonomous weapon systems, for example. Because these algorithms are actively generating ideas about what good looks like, what is normal and anomalous. And they are finding quite new ways to do this. So in many senses now, number is not seen as quite enough in terms of generating new targets. So something of a holy grail in machine learning now is to close the decision gap, so that it's not only a numeric output or a risk score that emerges from the algorithm, but that it is a sense of meaning, or that it is a series of sentences or lines of text that somehow give apparent context to the human being who's actioning the decision. So from the neural networks used to read MRI scans in hospitals to border controls and counterterrorism, what I'm seeing this computer science do now is reframe what an algorithmic decision could actually be in the world. So let's just look at one for a moment. The so-called show and tell algorithm. We call it show and tell, the computer scientists explained, and they said, it is no longer simply about recognizing an object or recognizing a face, but what they called scene understanding. So they said, a description must capture not only the objects contained in an image, but it must also infer the scene, express how these objects relate to each other as well as their attributes and the activities they are involved in. So how objects relate, their attributes and their activities, the algorithm, we might say, is making some things matter. So here it's not only a case of humans collaborating with algorithms, but two sets of algorithms collaborating with one another.
The first, a set of neural networks to decide what matters in the scene, to recognize what's happening, and the second, to use natural language processing to generate lines of text that can be then read by a human operator. So to make the output actionable, what might an output mean in the context of an airport or a gathering of people in a city street like Berlin? Let's have a look at their test image and see what it is that they're claiming. A group of people gather together in a marketplace, and we might say, well, their relations are uncertain and indeterminate. We cannot possibly know from the image how these bodies relate one to another, how they inhabit that space, what their attributes or qualities might be, perhaps especially to infer what they might be doing. And yet, from the multiplicity of relations in the scene, the people, the vegetables, a kind of clouded backdrop of market stalls, these two sets of algorithms are generating a single condensed output in the form of two sentences. So the output here is a group of people shopping at an outdoor market. There are many vegetables at the fruit stand. Just hold for a moment, the mistake of vegetables and the fruit stand. So the algorithms are generating a field of meaning. They are the mise-en-scene of the space. They decide what is to be in the output. And yes, they mistakenly infer vegetables from fruit stand because of the past data of objects they have been exposed to and trained on. But even the error is useful once we think about deep learning in this sense. So you might want to have a little look at some of the output of these test images using the same algorithm as the market stall. My favorite is the man playing the violin, where the output of the algorithm is a man wearing a hat and a hat on a skateboard. So we might point to that. We might say, look, here's the error. Here's the reason why we could never have algorithmic meaning in the world. And yet, how are the computer scientists understanding it? They're saying examples of mistakes where we can use attention to gain intuition into what the model saw. So even the error is incorporated back into the capacity to think about what the model could see and what the model could generate. And I think that we need to speculate with some of this technology for a while and ask ourselves, what's the gap between a man on a skateboard and other sorts of outputs of meaning, sought at the US-Mexico border or in the city street? What would it mean if this kind of algorithm were to output at the US-Mexico border, here is a woman holding a child at the border fence or in protest detection software, in generating a text sentence or a series of sentences in place of a number? What would it mean if the police received the message, here is a placard being held in Tahrir Square and there is an unauthorised crowd in the foreground? So what I'm suggesting is that the output of these show and tell algorithms is reducing and distilling the intractable difficulties, the politics, the duress of living and the undecidability of what could be happening in a scene. There's a lot at stake, I think, if a convolutional neural network shows that a vehicle has a 0.65 probability of being a military convoy and then a recurrent neural network is telling that this is an actionable threat, what is the place for some other alternative output? What is the place of another probability or likelihood that this is a civilian bus in the scene? 
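For those who want the mechanics, here is a deliberately toy sketch of the show-and-tell idea: an encoder condenses the scene to a feature vector, a decoder emits one word at a time, and at each step the alternatives that were not chosen are simply discarded. The vocabulary and scores are invented; this is not the published model.

```python
# A toy sketch of the "show and tell" pipeline: encoder features in, one word
# out per step, with the paths not taken recorded alongside. Everything here
# (vocabulary, features, scores) is made up purely to show the mechanics.
import numpy as np

vocab = ["a", "group", "of", "people", "shopping", "protesting",
         "market", "at", "an", "outdoor", "<end>"]

# Pretend output of a convolutional encoder for one image.
scene_features = np.array([0.7, 0.1, 0.3, 0.9])

def next_word_scores(features, words_so_far):
    """Stand-in for the recurrent decoder: returns a score per vocabulary word."""
    rng = np.random.default_rng(len(words_so_far))        # deterministic toy scores
    return rng.random(len(vocab)) + features.sum() * 0.01

caption, rejected = [], []
for _ in range(8):
    scores = next_word_scores(scene_features, caption)
    order = np.argsort(scores)[::-1]                       # best word first
    chosen = vocab[order[0]]
    rejected.append([vocab[i] for i in order[1:4]])        # the paths not taken
    if chosen == "<end>":
        break
    caption.append(chosen)

print("output:", " ".join(caption))
print("rejected alternatives per step:", rejected)
```

Whatever nonsense caption the toy prints, the rejected lists are exactly the kind of discarded pathways the lecture asks us to keep in view.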
So how do we think about all of those rejected pathways that were excluded from the generation of the output? So these neural networks are converting data into feature vectors that can be recognised then as similar to or different from those present in other scenes and situations. So what matters to the algorithm, and what the algorithm makes matter, is the capacity to generate the output, to be able to tell always what is latent in the scene. And it's precisely this emerging alliance, if you like, between algorithms working on attributes to show and to tell and a modern politics that, as I wrote in my book The Politics of Possibility, is trying to anticipate the latent propensities of populations. So I think perhaps here is one of the departure points in terms of this not only being about a process of capital or marketisation in terms of introductions of algorithms into public life, that there is an alliance between the detection of propensities by the state and the detection of opportunities by the market for these kinds of algorithms. And for me at least, this is a profound ethical, political challenge for our times. What we do, I suppose, how we respond. And I think that this is an undeniably difficult problem. You can understand, you know, I think why there's moral panic in the media about particular sorts of algorithmic decisions. So around autonomous weapons or autonomous vehicles that might in the future make decisions about life and its value. But I want to suggest, though, that what it means to be human, to decide and to act, is also changing in relation to our interactions with algorithms. So that means, in terms of those show and tell algorithms, that the border guard, the oncologist, the judge in a trial, I think, will increasingly understand themselves and their expertise in decision-making differently because of how that expertise is collaborating with algorithms like show and tell. So just a couple of very short extracts here from interviews, but here is an oncologist who's sort of agonising over that space of decision, confronted with the uncertainty about what the optimal treatment pathway might be for this singular individual patient in front of them. And this oncologist is acknowledging, if you like, the limits of what he can know. And I think also, in a sense, welcoming partly the presence of what he thinks of as the past experiences of multiple other physicians that he imagines to be present in that treatment pathway software. So almost as if these past decisions are lodged in the algorithm itself. And yet, all of the time, haunting his sense of what can be known, the difficulty of deciding something against the flow of the output. How would one decide against the flow? What could it ever mean then to have what we think of as a human in the loop, as the guarantor of ethics in the algorithm? If that human in the loop is understanding their relation with the world differently, their decisions about their patient are themselves understood differently, in collaboration with the algorithm. So what do we really mean when we say there should be a human in the loop of machine decisions? In a sense, perhaps the human is in the algorithm all the way down and the algorithm in the human. So there are then, it seems to me, some persistent problems with the very idea of algorithmic decision. Not least that there is no human outside to act as the guarantor of the good.
They're always already also inside that new framework or paradigm of knowledge. So then there's no decision as such in what societies have begun to call algorithmic decision. There are outputs. Yes, there are outputs. But there remains, I think, a space of uncertainty, an important space of uncertainty between the output of the algorithm, a calculus, and a decision that we might consider to be worthy of the name of decision, a decision that is taken in the context of profound uncertainty about the multiple potential outcomes of that decision. It could never be reduced to one in those terms. So whether to give someone a mortgage, a job, medical insurance, whether to stop and detain someone, perhaps in every space that we think of as having an algorithmic decision, we might reflect on that gap between the output and a decision worthy of the name. I think that's a space that we ought to be paying more attention to. I don't think that space is automated, even when we have this sense that the problem is autonomous decisions or automation. It is, instead, I think, already fully ethical and political. It carries within it values, assumptions about the world, past things it has learned through its exposure to other people and other places. And so you might, at this point, reasonably ask, well, then what do we do? What could the response be, if we're not looking for some sort of notion of opening something to scrutiny or securing the algorithm with the human in the loop? So in the time that I have remaining, I want to just sketch out the parameters of what I think we could be talking about. So it seems to me that in our contemporary moment, when targeting and deciding is taking place increasingly in collaboration with machine learning algorithms, the reflex response, the public response, is often to say, well, these are autonomous technologies and they are unaccountable. They are machines, if you like, making decisions beyond the human capacity for scrutiny. In the terms of your themes, we cannot make sense of them, they are somehow concealed from us. But to draw to a close, I want to suggest that actually the harm done is not primarily the ceding of human control to machine decision. Now, the principal harm, I think, is a specific threat to the notion that we live together and we decide, uncertainly, in the face of difficult and intractable dilemmas. And that is politics, I think, that is political life. So the claim to secure against uncertain futures with algorithms forecloses other potential futures, even where the neural net itself, as I've described, embodies a teeming multiplicity of pathways that were not taken. So when the algorithm condenses a single actionable output, I would like us always to remember that this output signal lies behind actions, like risk scoring at borders or the potential future of a child in relation to social services. Increasingly in the UK, this is being used to make differentiations between different levels of at-risk children in society. Decisions on detention, on immigration or on the dangers of a gathered protest on a city street. So for me, there can be no algorithmic accountability in the Enlightenment traditions of transparency or clear-sighted account. That means no way of having a code of ethics that we might say all algorithm developers and designers should sign up to. No opening of the black box. Instead, I think we could demand that algorithms give a necessarily partial account of themselves. It seems to me that this is not a new problem, in a sense.
The impossibility of giving an account is the precondition of politics, of the difficulty of decision. Philosophers and political theorists have been talking about that for a very long time, the impossibility of that clear-sighted account. So in some ways, algorithms don't pose a new problem, but they do expose very vividly a persistent problem of grounding ethics and responsibility in ideas of objective sight and knowledge. As the philosopher Judith Butler reminds us, we do not reach the limits of ethics at the edge of intelligibility, where we can no longer make sense. On the contrary, she says, it's at the limits of what can be rendered intelligible or known that ethics becomes most crucial. So in your terms, perhaps ethics begins with sense-making, with imagining different ways of sense-making. So at the heart of my call for a different mode of ethics, which in my new book I call a cloud ethics, there are three proposals of a kind, and I'm going to conclude by just mapping what they are. So first, I'm proposing we must rethink what ethics means in relation to algorithms, so that it is no longer a question of imagining that we stand outside and adjudicate on their behavior. The philosopher Michel Foucault, in common with many other political theorists, distinguishes different forms of ethics, and so on the one hand he talked about the code that determines which acts are permitted and which forbidden. For me, that could almost describe this sense that there's a Silicon Valley problem, and if we could just delineate what could be forbidden and what could be permitted, that we would make some kind of progress. And I think that his sense that this is a limited form of ethics perhaps comes from his own work on the governing of norms in relation to sexuality. So he distinguishes that from a different kind of ethics, which he describes as the inescapably political formation of the relation of oneself to oneself and to others. So the inescapably political formation. And this distinction I think could be crucial, because which acts are permitted and which are forbidden I think is a sheltered form of ethics in which we will be continually trying to adjudicate on when algorithms step over the line. Perhaps we might talk about that some more in questions. But actually, what are they doing in terms of what I've described tonight? They're functioning precisely through a reorientation of selves to selves and others, through this inescapably political formation. And I think that's what I want to try to work with, this sense that algorithms mean that we are still struggling with a political formation and what it might mean in the context of machine learning. So then second, I would like to think that we could reconsider the output and what it means in our world. That we might be able to reflect on it differently, so that the output of the algorithm is never understood as determining a decision. And so instead of thinking of outputs, I would like us to think about something like an aperture. So for those of us who are working on notions of aperture from the arts or photography, the aperture is always both a closure, a reduction, but also an opening. So it might be able to open out and think about what were the other alternative ways of reasoning that might have been present before that closing down took place. We should make some trouble, I think, at the aperture, as the feminist Donna Haraway suggests.
We should stay with the trouble and, as she describes it, follow the threads in the dark. So even in the face of reduced outputs, I think we could consider the traces of rejected alternatives. And for me, this has been a kind of thought experiment. I've tried to exercise it in relation to facial recognition biometrics, for example. So you might know that in April, a Brown University student in the US who has Sri Lankan parents, Amara Majeed, was misidentified by the Sri Lankan authorities as having a link to the Sri Lankan bombings. And the apology that was issued by the Sri Lankan authorities said the facial recognition system misidentified her, by which point, of course, she had death threats in her inbox. Now, I think if we see the output of the algorithm only as being a mistake or misrecognition, we might then lose sight of what I think could have been going on in the aperture, which is that when she was a teenager, she wrote an open letter to Donald Trump expressing her concern about the targeting of Muslims. She also has a project called the Hijab Project. There are multiple other forms and lines of narrative and story that make that designation of her as a risky person not a mistake or an error at all. Does that make sense? If we think with the aperture, we don't say it's just made an error, a mistake, it can be fixed, we can modify. Instead, we say, well, actually, what were those other potential links and correlations that it was working with? So that would mean, if we were to go with this thought experiment, that every time we're confronted with something like an algorithm that says here is an optimized output, this is optimization, our first thought would be, yes, and what were the rejected alternatives? What other forms of connection and being together that are not already explained might be present? How could the output have been otherwise? What are the bifurcated pathways that continue to run beneath the surface of an optimized solution? And finally, the weights. I wish to make the weights in the deep learning algorithms a lot heavier and more burdensome. And a few generous computer science friends of mine have urged me not to pursue this and have said that this is not something that we should do. So they have described it to me and said, look, the adjustment of weights is an impenetrable process that retains its opacity even to those who are undertaking it. So one of them said to me, you cannot make the weights political, Louise, because they're not really a thing. We don't know how they work. We are just messing around with them. But I think they didn't realize at the time how this was music to my ears, that it was something that was opaque and that they were messing around with, because it's exactly this kind of opaque, messy and embodied experimental relation to the algorithm and its data that interests me. As Butler says, a certain opacity persists. So when the judge, the oncologist, the clinician or the border guard decides with algorithms, I think they also necessarily don't know how they work. They are just messing around with them. So some of the most fundamental political and crucial decisions of our times are being made, I think, through this modified and fungible notion of what can come to matter. And so there it is, I think, our lives with algorithms, the inescapably political formation of relations to ourselves and to others. Thank you.

Thank you so much, Louise, for this very concise talk.
And plus, I don't think we've ever had anybody who was that punctual, I mean, to the minute. It's nine o'clock, so we have ample time. We don't need the whole hour, I guess, for this conversation. There's so many points you touched on. Let me start with the bird's eye view and then get into a little bit more detail. Would it be okay to say, after your talk, that what you call the latent possibilities that algorithms are after with a single output are sort of robbing us of a sense of an open future? Would that be the bottom line of your central argument?

Yes, I think, is my microphone on? Yes. Yes, I think in a sense you could interpret it as a kind of robbing of alternative futures. But for me, I suppose there's a need to think about that as a kind of double political foreclosure. So on the one hand, there is a, I think, undeniably, a kind of political, practical, pragmatic aspect of that. So I was kind of serious when I said, or even to board a bus to make that claim. So I have been thinking about moments of past significance historically in terms of people assembling and organizing to make a political claim that's not yet registered as already having a body of rights attached to it. So we might say the civil rights movement, for example, or apartheid. And to ask myself, well, what role would machine learning algorithms have in contemporary struggles, if you like, to make a claim that's not already registered? So there's that aspect. But then there's this kind of spatial foreclosure in the condensing to one. And I think that for me that's, I suppose politically there are more possibilities then for us to think beyond just the prising open of a black box. Because what becomes significant is actually how does that multiplicity become reduced to that single output?

Well, algorithms, as I understand them, especially in this year, are something much simpler than I thought at first. They're mainly concerned with statistics, with probabilities, with single outputs. We've heard a lot about that tonight also. And you make an argument in the introduction of the book, Cloud Ethics, that you were referring to in your talk. Also, you start by citing Wittgenstein and then say, well, algorithms discriminate by nature. That's how they get traction. That's what they do. That's what defines them technically, so to speak. It's just a technical question, I guess, now to ask: how would we reach more moments of aperture, of uncertainty, of future alternatives, when those algorithms are actually trained exactly to avoid that? Because the result, the output, has to be single. How to circumvent that sort of structure of the algorithm, as I understand it?

I mean, I think that this question of intrinsic, I think that's the word that you use, that somehow it's intrinsic. And I think that that does matter, at least it matters to me, that this seems to be the terrain that even the most critical voices are occupying. And I have enormous respect for that work. That's work which identifies the particular kind of racialisation that's happening through, for example, facial recognition algorithms. And many other examples of that kind of important work. But I guess I have a concern with the sense that one could ever extract something like bias or discrimination from the algorithm and render it something that you might call neutral. So I guess that is my point in terms of thinking about the algorithm as always already ethical, political.
That it somehow then shifts our public discussion from how do we rid it of these dimensions towards saying, well, actually it will always discriminate, even where you think that, in the inputs or the training data or the test data, that hasn't happened. Let me give you an actual example of that. So let's say that you have a police force using a facial recognition algorithm and they realise that they're getting too many positives in terms of the output to actually be able to do anything about that, to be able to action it. So then they would move what they understand to be the slider in their interface, which we might then say, well no, that's not just a slider, right? That's a threshold, which means that that sense of, well, are you crossing a threshold into 0.62 or where are you on that? That that's modified. So that means that their judgments are then also enfolded back into the algorithm. So I guess what I'm saying is maybe we should be quite careful about where we imagine the algorithm itself begins and ends. It doesn't begin with the source code. It doesn't begin with the sketching of the model. And then that also opens up the question of how could we then adjudicate it, given that each, I guess you could say, every time it's deployed, this is also a training opportunity, right? So every deployment modifies the algorithm. So how would one ever successfully excise those sorts of forms of what we might call bias from the algorithm? We'd have to think differently, I think, about how it's learning.

There's one word you've used in the introduction. I don't think you used it tonight in your talk. I think that is very interesting because I think it draws from speech theory, actually. Something Judith Butler draws a lot from, whom you've referenced tonight. This is the iterative process, iterability, I guess. So an ongoing process that is never completed. This is a call you make for our understanding of algorithms to change, maybe also for the algorithm to change. Here is an ethical political tension that is worth holding on to, you write there. I'm still just wondering how this would be possible for the algorithm to achieve. This iterative process of repetition, of modification all the time, that actually calls for alternatives, as we talked about: how can an algorithm, or we, learn to interpret the results of the algorithm in an iterative way, in a sort of way that is informed by the past and the future, to open up that sort of future you were talking about?

That's just such a good question. I think maybe my starting point there was also a sense of, well, where are we? How are we responding to that? And I think that, and also in the book, I tried to sort of work through what it is about the idea of securing the authorship of the algorithm. So let's move away from iteration for a moment and say if one could identify authorship in some form. So we might say that this could lie behind even some very good critical work, like the work of Kate Crawford, because it's linked to this notion that one could more successfully train computer scientists to be aware of these things. And I think that that is a sense of, well, the author has a sort of locus of responsibility. But also, I think, in practical political interventions like the one the New York City Council made in relation to its algorithmic accountability act, where it was trying to say, well, actually, if the algorithm's output has some sort of public consequence, then the source code itself must be made available.
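A small illustrative sketch of that slider point, with invented scores: the model produces exactly the same numbers throughout, and only the operators' threshold determines who counts as a positive.

```python
# Illustrative only: the "slider" is a decision threshold applied to scores
# the model has already produced. Moving it changes who is flagged without
# changing the model at all; operational capacity is folded into the "decision".
import numpy as np

scores = np.array([0.41, 0.58, 0.63, 0.71, 0.55, 0.66, 0.92, 0.60])  # one score per person

def flagged(threshold):
    return np.flatnonzero(scores >= threshold)

print(flagged(0.55))   # slider at 0.55: seven people flagged -- "too many to action"
print(flagged(0.70))   # slider moved up: only two people flagged

# Nothing about the model changed between those two lines; the operators'
# judgment about workload is what the resulting "algorithmic decision" carries.
```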
And I thought that was so interesting, that it's not just the author, but it's also, okay, well, actually, maybe it's really located in the source code. And so, again, we have lots of philosophical resources for thinking about authorship. And the problem of authorship, not just of algorithms, but of anything, is that all forms of authorship, we might say, are already distributed. So your question about what that might mean for our analysis of an algorithm would be that the iteration would mean, instead of thinking author, we'd think writing. We would think a sort of a process, an open-ended process of writing in which there are multiple writers at work. And I think that might sound sort of abstract, but actually, I think it does undercut some of the sort of reflex critique around, for example, privacy, in which you might say, well, if it doesn't have access to my data, I won't be an author, it won't be using my data. But as I explained, it doesn't need to, right? It can, the attributes and the clustering of attributes, I think, means that it doesn't need to have data that's affixed to a particular individual. It's about some small pieces and fragments of one person in correlation with another. So I think that there are actually important kind of political potentials in thinking about writing in relation to algorithms. So that's through its relationship to data, not just training data, as has been very well illustrated by people like Joy Buolamwini in her work, for example, but writing in and through every one of us. Yeah, I guess that's what I mean by our lives with algorithms.

Writing as in doing instead of making, maybe, for one thing, right? But where does that leave legislation, or what kind of legislation can deal with that different kind of process? You also outlined, with reference to Michel Foucault, you know, the death of the author. So if it's not about the author of the text or the code, for our purposes tonight, so to speak, as you argue about its structure, what does that mean? What kind of court? What kind of legislation would that be that deals with a more processual, process-oriented kind of writing or doing?

Yes, and of course, I have found that question comes especially a lot from the legal profession, where they would say, well, what do we do? How do you interrogate this as evidence in a courtroom? What could it actually practically mean? And I think you probably already have seen in the introduction to my book that I was once really pressed on that point by a lawyer and they said, well, what would it look like, Louise? What actual form could this take? And I thought for a while and said, well, it would be a very crowded courtroom, right? That it is about, it's not about diminishing the responsibility of particular, nameable individuals, but it is about trying to intensify and bring into our understandings of ethical, political responsibility a much broader sense of collaborators. Let me give you an example, because that sounds a bit abstract, doesn't it? But I followed for some time a group of surgeons who were using the Da Vinci surgical robot, and they were obviously also using neural network algorithms, though they didn't all realise that that was what was happening. Now, of course, in the backdrop to that, there are pending legal cases, many of them. And our, I suppose our sort of moral panic around that would be, look, here is the example of the autonomous machine that makes a mistake.
So the surgical robot that punctures a vital organ or makes an error and mistake that has a fatal effect on the patient. But what I thought was so interesting about the surgeon's engagement with their robots was how they started to understand themselves differently. So in one case, a surgeon described it to me as I was talking about how she makes decisions and she said, we, and I said, do you mean we, you and your team? And she said, well, yes. And obviously, of course, the robot too, right? So this was, you know, and I think her sense of the conditions of possibility of her being able to act in relation to her patients was absolutely changing through the collaborations with the algorithms. It was not just the kind of physical movement of the robot. It was this sense of what could be known about this particular rare tumor in relation to 5,000 previous surgeries that had happened in different parts of the world. So she understood, I think, that that input data meant that there was the presence of others in the room. And so then I would say that taking to court one particular tech company or one particular manufacturer of a particular robot doesn't quite do justice to this life I'm trying to describe in which we are all implicated. So I must, I suppose caveat that to say that, I've been pushed myself on these questions even before the select committees in the UK Parliament where elected officials have said, so do you think then that these machine learning neural networks should be banned in relation to criminal justice? Should they? Yes, no. And you know, if what you want me to do is say tonight yes or no, I guess I can't because my concern is that if we say no, we still allow for those very models to have an onward life somewhere else. So if we imagine we've dealt with the problem, they will be used elsewhere. And if you read carefully the computer science papers around the use of these sorts of convolutional neural nets, you'll see that in areas of our life where there seem to be inadequate data, everything actually from liver transplantation where it's thought that there's not enough data to know what would the optimal choice be right through to counterterrorism where you might say, well, there are insufficient past events to ever be able to really model this. Very, very often in that computer science literature, the solution is we've got a pre-populated data model that's been used in some other domain and now we can modify it for the purposes here. And that's exactly the kind of onward life of the algorithm that I'm trying to get to grips with and it would not be ever fully dealt with by saying we can prohibit it here and we can permit it there. So I think somebody, a famous philosopher, Jacques Derrida was once asked, well, surely we need human rights? You know, you might say, well, surely we need to limit the algorithm? Well, and he said, you know, well, yes, we need them, but they are also in need, right? They're never sufficient. So yes, we need legislation. Absolutely, facial recognition technologies are being used in the UK with no regulatory framework, with no oversight. It shouldn't be happening. But if we had that legislation on its own, it would never be sufficient. The job wouldn't be finished, right? I understand this is a call for complexity, but let's go back again for a moment through a detour over Butler again to the protesters in Baltimore. At least I tried to do that. I've read that elsewhere in Butler's work. 
I think it was Excitable Speech, Hass spricht in German, 1997, where she talks at length about agency, and one of the things that really stuck with me over more than 20 years now is that she said agency begins where sovereignty wanes. This is something that sort of had a ring tonight, too, in your talk, I think. So in our context, the question would be which sovereignty: the sovereignty of the algorithm or the sovereignty of the coder, or of the legislator, or of the targeted person that is on the other end, you know, at the output, so to speak, which would be the protester in Baltimore? What's he or she left to do when agency begins where sovereignty wanes?

Well, there's the sovereignty question, which, I suppose, for my own work, I spent a long time with in The Politics of Possibility book, because then I was precisely concerned with, well, how do particular contemporary forms of sovereign authority start to use methods, at that time much more like data mining, to be able to newly enact their sovereign authority? In other words, using forms of data and algorithms to kind of intensify particular forms of sovereign power. But then, I mean, I guess I'd have to ask myself, is that now adequate to the task? And actually, I'm not sure I know the answer. I mean, because to an extent, what I was writing about there were still rule-based algorithms. I mean, really, those sort of if-and-then formulations, and in terms of your themes of making sense, I might then have been tempted to say, well, actually you could make some sense of that if-and-then. So have we got at our disposal in social theory, in political theory, in all of the work at the moment around the digital, do we yet have a kind of adequate way of thinking sovereignty in relation to that? I don't know how many different answers we'd have to this. I mean, I don't think so. I don't think that, for example, Benjamin Bratton's account in The Stack is quite right in terms of its relation to sovereignty. Why not? Because of these sort of abductive forms of logic, because it's not about the rules and the rule-based form of the model. And because of this sense that we have a space of play in relation to the output, and then we modify the model in relation to that output, which is not quite abduction, I recognise that, but I still think it means we have to rethink the relationship between sovereignty and knowledge. And maybe that also means that our sense of performativity is perhaps also not quite, I don't know whether that's right, but that sense that, well, with Butler, it produces the effects that it names, right? So is it that the formulation of the problem in the algorithm produces the effects that it names? Or is it that actually it begins with the effects? Like I was describing the immigration and border controls model, which absolutely had a life in the world, which was about violence and sovereign violence. It was about making adjudications about who can and cannot be considered to be a fully recognisable human being at the border. That algorithm was absolutely intrinsic to sovereign authority. But did it produce the effect it names? Or is it something even more dangerous, where actually the effect that it names then reframes what we mean by the state, what we mean by the border? And I don't have an answer to that. I'm sure there are lots of PhD students in the room who are writing brilliant theses about it. No, I think that, I'm serious about that.
I think that some of the work of the people coming through and writing there; I've examined five PhDs in the last 12 months, and I'm so encouraged by the work of young people coming through who really do have a grip on this sense that the sorts of categories that we use, population, individual, state, sovereignty, actually need to be thought differently if we're going to have any kind of critical purchase on this digital society problem. Just one last question. Again, to Baltimore, actually. Let's take it from the view of the protesters you had in your picture. Would a way to increase their agency, so to speak, be to actually limit their own agency? And this leads now to a really stupid question, I guess: just turn your cell phones off. Actually limit your traces, limit your trackability, limit your possibility of being recognized. Would limiting your agency be a way to a new agency, in this very concrete context? Thank you, I'm so glad you asked that question. Because that sense of, well, can one withdraw? Can there be a mode of resistance to what I call in the book this regime of recognition, one that withdraws and then opens up possibilities for assembly that were not present? And I think we still hit problems. And actually it's something I've looked at much more recently, because you might have seen in the UK press that an individual was arrested by a police officer for concealing his face as he approached a railway station where they were trialling facial recognition algorithms. He saw the sign that said this is what was happening, and he concealed his face. And then he was actually fined. So I was interested in this and I thought, okay, what does that mean? Is that just about violence in public space and this sense that you must somehow expose yourself to this, if you want to call it a form of surveillance? But what I did instead, which I suppose has become a bit of a reflex with this apertures thing, is go to the computer science and ask, okay, what are they doing about concealed faces? And that's where I found what I think is the real political traction in this, because, and actually I should have included some of those images in these slides, because it's really powerful, they are training these convolutional neural networks on concealed faces. So they are using so-called open-source data from the internet, faces that are partially covered for any number of reasons, and I'm sure we can think of many of them, and training the convolutional neural net to be able to modify its output even where the face is concealed, because at the level of the pixelated data, you only need the patterns. So in that sense, what does it mean? It means the technology is, in a way, disinterested in the face itself. So again, the critical reaction which says, oh well, this is bad because it's about human faces and that's so personal, that's our personal data: actually, it's worse than that. The convolutional neural net is indifferent to your face as such. It can be trained on partially concealed faces, and that's absolutely what's happening in the 2019 computer science journals that I'm sure everyone's reading. But I think it's a crucial point, because it means that if our reflex in resistance is to withdraw or to conceal, then actually we're missing the point of the politics of the technology itself.
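To make the point about concealed faces concrete, here is a small, hypothetical sketch of one standard mechanism involved: random-occlusion augmentation during training, so that a classifier learns to produce an output from whatever pixel patterns remain visible. The synthetic FakeData set and the tiny network below are placeholders, not the systems described in the papers referred to above.

```python
import torch
from torch import nn
from torchvision import datasets, transforms

# Augmentation: RandomErasing blanks out a rectangular patch of each image,
# a crude stand-in for a scarf, mask or hand covering part of a face.
train_tf = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.7, scale=(0.1, 0.4)),
])

# Placeholder data; in the setting described above this would be scraped
# images of partially covered faces with identity or attribute labels.
data = datasets.FakeData(size=512, image_size=(3, 64, 64),
                         num_classes=10, transform=train_tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# A deliberately tiny convolutional network: it only ever "sees" pixel
# patterns, which is why partial concealment need not stop it.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 10),
)

optim = torch.optim.Adam(net.parameters(), lr=1e-3)
for images, labels in loader:          # one pass, for illustration only
    optim.zero_grad()
    loss = nn.functional.cross_entropy(net(images), labels)
    loss.backward()
    optim.step()
```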
And the same perhaps would apply to individualized notions of privacy: that if I can be sure that my image at King's Cross Station is deleted from the police's database, in terms of the training of the algorithm, then somehow I have withdrawn. And I want to say no, that's not the case, because what happens in those 24 hours is that the sensitivity of the algorithm is adjusted. Its performance is assessed and it's modified. So again, it demands new terms, new ways of thinking about the relationships between individuals and the state and technology. Yeah, but thank you for that question. It's really provocative. Oh, really straightforward answers. Thank you for that. So, it's time for your questions or comments. Yes, let me gather up. Oh, there are people up there also. Yes, there's a gentleman in the first row. I can hear myself. Yes, we can hear you. A little bit of criticism, a very German criticism: you mix very clear algorithms with neural networks. Sorry, I'll continue in English now. It's a hell of a different thing to talk about neural networks or to talk about classical algorithms. To make a little example here: let's say you want to make a machine giving a probability of someone being criminal again after being released from jail. You can use a classical algorithm, which means that you use knowledge that you already have, let's say from social science, that gives you, for example, that someone who is black has a 20% higher probability of being criminal again, et cetera, et cetera. First of all, if it's open source, everyone can read it, everyone can criticize it and everyone can just... A neural network is a black box, actually. You train it, you think about what factors will be really important for your result, that is, the probability of being criminal again: maybe the race, maybe the skin colour, maybe the sex, whatever, the age. But you don't know what the neural network is really doing. It's a total black box, it's not transparent, and it's a totally different approach. And you're totally mixing up these two issues. You're talking about neural networks and algorithms as one thing. It doesn't make sense to me. So I think I agreed with you up until the very last moment, because that's exactly the distinction that I was making between rule-based forms of algorithm, in which one could, I think I said, perhaps make interventions that were precisely about intelligibility, and deep neural network algorithms, and the two specific ones that I was talking about tonight were convolutional neural networks and recurrent neural networks. And for me, exactly what you said, that is the point. And I wouldn't call it the black box, because there's still this notion of treating it as something that's concealed, as though other aspects of our social world were simply available to us to observe, which I think is really problematic when critical social scientists say, well, this is black box. And I say, well, what about the other things that you research? Are they completely available to you? Do they carry with themselves a sort of explanation in the world that can be understood? So maybe where we disagree is that I'm saying that we then precisely should be saying that these are part of our research material. They're black-boxed from us, but they're also absolutely integral to the new ways in which we're understanding ourselves in the world. I mean, if what you're saying is that we should be really specific about which algorithms we're talking about and when, then I'm completely with you.
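The distinction the questioner draws can be illustrated with two short fragments: a rule-based score whose every factor and weight is written down and can be read and contested, next to a small neural network whose behaviour lives in trained weight matrices. The features, weights and synthetic data below are invented purely for illustration; they are not taken from any real recidivism tool.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Rule-based, "if-and-then": every factor and weight is explicit,
# so anyone can read it, criticise it, or change it.
def rule_based_risk(age: int, prior_offences: int, employed: bool) -> float:
    score = 0.1
    if prior_offences > 2:
        score += 0.3          # weights chosen and written down by people
    if age < 25:
        score += 0.2
    if not employed:
        score += 0.1
    return min(score, 1.0)

# Learned model: the same kind of inputs, but the mapping to a score
# lives in trained weight matrices rather than in legible rules.
rng = np.random.default_rng(0)
X = rng.random((500, 3))                              # placeholder features
y = (X @ np.array([0.5, 0.3, -0.4])
     + rng.normal(0, 0.1, 500) > 0.2).astype(int)     # placeholder labels

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000).fit(X, y)
print(net.predict_proba(X[:1]))       # a probability, but no readable rule
print([w.shape for w in net.coefs_])  # only arrays of learned weights
```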
And I think perhaps I haven't resolved that problem myself, because what happens with an audience, and I've discovered this in lots of different places, is that the greater the level of detail that we enter into, for example on how in 2012 convolutional neural networks transformed image recognition with ImageNet and what that means for facial recognition, at that point, I think we lose the audience. So there's a question: how do we take seriously the very different forms of arrangements and propositions of these different algorithms, and yet still hold onto a sense that these are not technical objects removed from us and from the world of research and thinking that we are all engaged in? Thank you for your question. I think I was the one who didn't make the differentiation, actually; maybe the critique was directed at me. I think tonight the central argument was a call for more opacity, for more obscurity, actually. So I didn't really see the distinction at play there. I was going to say one more thing, because actually I really do appreciate that intervention. I think that so much of the critical social and public debate, at least in the UK (maybe it's not the case here in Germany), has misunderstood what it's talking about, and absolutely proceeds as though we were talking about if-and-then, and as though we could fix the if and the then if we think that something in there might be socially unacceptable, extracting the bias or whatever we might consider it to be. So I think there is an important point to be made about specificity, and about how much specificity we can manage in the social sciences and the humanities. And also my computer science friends insist on specificity and say to me, but isn't that a support vector machine? Isn't that thing you're talking about that? But then I say to them, well, I'd like you to be specific: when you say bias, I want you to be specific about what that means. So, in other words, somehow within the vocabularies of computer science you must have specificity, but when Cambridge Analytica are employing psychologists to define the propensity to be influenced, they are not required to think critically about what influence means, even though we have vast histories of philosophy and social science knowledge precisely interrogating what that means. So I would like there to be a little bit more of a flow backwards and forwards, but thank you. Okay, let's go with the flow. There's somebody in the fourth row who's been waiting for a while, please. Thanks for a great talk. I couldn't agree more with you that there's always an ethical-political bias, that this bias is productive, and that this is what we need to focus on: the space where humans can take decisions, right? If I understood you correctly, and I guess I understood you correctly, you have chosen to look at the room for ethics, the apertures and weights, after the output, because it's there that we want to fight the reductions that the algorithm produces, right? But I want to push you back into the process of making predictions with algorithms. Don't you think that, in addition, it would help to start with the aperture rather early?
It goes in the direction of what we can do to better grasp their results, because many parties work on the design, the programming, the data collection, the implementation, and so on. Shouldn't it then help to have those different parties actually work together and speak to each other, in terms of how they collaborate and what they even mean by parameters and so on? Plus, I think one of the best things I've heard came from a policeman that I spoke to: when their police officers start gathering the data, they actually add stories about where and how they gathered it, so that the people who engage with the data afterwards have at least a little bit of context. And I was surprised by how progressive this actually was. And I think this is the beginning of playing with apertures, if you understand what I mean, already in the production process. Thank you for your question. And I think that you're right. Perhaps everything that I did talk about this evening was somehow after the output, if we can think of it in those temporal terms. But maybe you're right to push me on the temporality of this, because I guess in some senses it's not a linear temporality, that aperture. Maybe I should be much more specific, actually, about what I mean by aperture in this context. So I think a resource for us here would be somebody like the art historian Jonathan Crary and his incredibly rich histories of perception and of technologies of perception that cross the arts and the sciences. In other areas of my work, I think I do get closer to thinking about the aperture in Crary's terms, which is that actually, what is this? It's a kind of dividing practice, a means of making these kinds of divisions and differentiations. And I haven't done it yet, but I think I probably should, because to me there is a close proximity between a convolutional neural network, precisely from 2012, that was trained to differentiate a particular fruit in an image, to determine whether it's an apple or a pear or a cherry, or to tell a leopard from a Dalmatian, and the things that we've become more interested in, like facial recognition technologies. That specific form of aperture is the condition of possibility for them. But at the very same time, that capacity to make those distinctions and differentiations has made possible the kinds of machine learning that mean that, apparently, particular organs can increasingly be used for transplant that were considered not usable in the past. And that is having an effect where we might say, well, there's the ethical good: to increase the number of livers that can be used, because the convolutional neural network can detect and draw that relationship, in terms of thinking about what is usable and not usable. Is that making sense? So in other words, the form of recognition that was possible there is then taken into the aperture in terms of people like your police officers. So I'm not sure whether, yeah, I guess the kind of research that really interrogates the human decision is what you're talking about in terms of the police officers. Yeah, that's interesting, thank you. Thanks, please. Yeah, thank you very much for this inspiring and also somehow troubling talk. I'd like to ask a question from an educator's perspective, because if we agree that these algorithms and their influence on our lives won't go away, then this will only increase in the future with the availability of data.
So my question will be, on the one hand, for the educational institutions who train our future data scientists and computer scientists: what would your recommendation be to them in terms of curricular development? And on the other hand, given the fact that basically everyone, no matter what kind of job and profession they have in our society, will at some point need to deal with these kinds of algorithms and make decisions based on their outputs: what would that mean for general education? Do you have any ideas or recommendations regarding that? Yes, that's such a good question, and such a difficult question to answer, because in a sense it's back to the trap of prescribing what this ought to look like. And I suppose I'm always struck by those people who challenge my analysis and say, well, but have you looked at, for example (somebody at Goldsmiths recently, a young PhD student, said this), have you looked at these kinds of generative adversarial networks? There are all kinds of possibilities and potentials there which are not necessarily about the reduction to one, right? And so then I feel a sense of hope that actually there is a kind of literacy, let's say, cross-cutting computer science, philosophy and social science, which maybe we haven't been doing sufficient justice to, and that we need to make space for that. Something that I've tried to avoid in my own practice has been trying to find a common vocabulary. So I'm trying to resist the idea of common vocabularies, right? I don't want what bias means in philosophy, or what objectivity means in the history of science, to mean the same thing as it does in computer science. Actually, I find the clashes and the dissensus between those vocabularies exactly the important site of politics. So I know that what a computer scientist means by bias is not what I mean by bias, but actually that in itself is, for me, an important political moment, because it's about asking how bias is embraced as a productive notion of the future, and what other possibilities there might be in thinking of it in those terms. So if it has been a bleak talk, then maybe there can be a note of hope there: there are, I think, resources available to us to imagine these, what I've called the rejected pathways or the traces of the rejected alternative. Actually, that line, the traces of a rejected alternative, comes from a novelist, John Fowles, who, reflecting on his own writing of novels, said that all of the time, as he was working towards his conclusion, he was aware of somehow being haunted by what he called the traces of this lost alternative pathway that branched away. And that's what I've tried to do in the book: to use some of those accounts of how novelists imagine their form of writing and to make some affinities with the critical computer science communities that I know are doing exactly that kind of thing, trying to think through ways to give some visibility to those lost pathways and branching points. So I'm probably much more hopeful than I sound. But thank you for your question. Thank you so much. We're already running a little bit late, and I'm just checking really quickly on Twitter. I mentioned the hashtag, so let's see if we have anything from Twitter to contribute.
The input feed from Twitter, that's right. Okay, there is one question on Twitter: algorithms are shaped by the data underlying them, so how do you feel we can effectively limit the data available for training in areas that present ethically dangerous zones? Hmm. Could you just say that one more time? Okay, I can repeat it one more time. Algorithms are shaped by the data underlying them. So how do you feel we can effectively limit the data available for training in areas that present ethically dangerous zones? Okay, thank you, I caught all of that then. So, algorithms are shaped by the data that underlies them. And I think, with the debates we've got about the "datafied", actually I really hate that concept, datafied or datafication, because I am always wondering what it really means in terms of thinking about how these sorts of algorithms are actually about modelling representations of data. So for me, the problem with this question of volume, which sometimes gets called big data, though thankfully I think we've stopped using that so much, is back to this problem of withdrawal in a way: the idea that, okay, if we could limit the data, if we could reduce it, that would somehow have an effect. And, as I was saying earlier, it's precisely in some of the areas of our social life where there is actually thought to be insufficient data to generate an effective model, and I'm speaking off the cuff here, in terms of my own gut reaction to these things, that sometimes those things trouble me more, right? So how models are being made on limited data is also troubling. So I guess the idea of reducing data or limiting the exposure is not what I'm looking for, and actually in some sense it misunderstands the logics that are at work in pre-populated data models, yeah. Okay, you have a question from the audience again? Yeah, thanks for pointing out how these algorithms, as they exist today, change the way we think about the world, think about ourselves, think about things. The most dangerous aspect probably being that they create this naive view that everything can be crystallized into a single verdict, say a number between zero and one. Now, history has it that developed societies have worked out how to handle dynamic processes, ambiguous situations, et cetera, most of it being crystallized in the judicial system, really, where you don't only get the verdict, but at least, towards the end, before the verdict is given, the prosecution and the defendant basically have a chance to represent their point of view. Do you think that the computer sciences have an opportunity to learn something from the judicial experience in that context? Well, first of all, I guess I'm not in the business of identifying an enemy and calling it computer science. And, much more seriously, with the use of these kinds of algorithms in policing and the judiciary, so across the criminal justice system, I still think in a sense we're misunderstanding what's happening here. So in the case of the UK, for example, some algorithms were being used in the context of austerity, in the context of there being insufficient public resource available for frontline officers, and that extends right across from social services not having enough social workers for so-called at-risk families, right through to police officers.
And so this kind of leaning on the algorithm as a decision-support tool, in the context of some of the police forces in the UK, was actually very much about: well, this person, a lone officer at a desk, has got to make a decision. Can this person be charged and then released? Is there a chance they'll get bail? Should they be detained? And that decision, in terms of the algorithm, is not being made based on that individual's data; it's being made based on a whole set of attributes of other past examples where people did exactly that. So again, it's back to that kind of entangled sense of the attribute working across those two spaces. So I don't want to be in the business of saying, well, we have computer science here and we have the judiciary here and we have the police here. I'm much more interested in how these kinds of techniques are crossing worlds, learning in one space and then quickly being reapplied elsewhere. Like that team I was following in 2017, who'd worked on problem gambling, and then on detection in MRI images, and then on detection in video-stream images for the military. So yeah, that's what I'm trying to do: not to delineate the different kinds of bodies, but to actually work across those different domains. But thank you. Thanks, I think we've got two more questions, the lady and the gentleman in the back, and then we should just about wrap it up. The drinks are ready. Please. Is this on? Yeah. Okay, thank you very much for your presentation. Petra Beulte, PK Bundesamt für Familie und zivilgesellschaftliche Aufgaben. I'm a little bit disturbed by what you said at the beginning of your presentation, the sentence "we know what good looks like". And I was thinking the whole time about the consequences for us, also in a political sense. What does that mean? Are we going in the direction of building up a new norm, of building up uniformity? Excuse me, what is your name? Sorry? What is your name? Müller. Okay, Müller. Uniformity, in English. Uniformity. Are we going in this direction, and isn't this also in itself already a political statement? Because when I think about the achievements of the last, let's say, ten years with regard to diversity, to anti-discrimination work and things like that, that have been achieved in politics, it seems to me that the whole development of algorithms and the way we are talking about it (is this the right expression?), all the technical blah blah and all these things, isn't this absolutely counterproductive when we compare it to the achievements we are already working on every day, in political education, for example? And also, I was thinking... Was that the question? I'm sorry, we have to speed it up a little bit. Yeah, I'm sorry. Because when you said you're disturbed by "we know what good looks like", I mean, absolutely. That single line was distilled out of hundreds of interviews with people who were designing these systems. For me, that single line, "we know what good looks like", is precisely the nub of my concern about what's happening in that space. Because it's actually not even that we know what good looks like and that a norm is being determined there. As I explained in relation to this malleable threshold, what might be good here isn't good there. So even if you wanted to interrogate it and ask, well, okay, so what is good then? What is that threshold? Where is the normal activity of the city square? How could one meaningfully ask that question? How is the data in this city square being modelled? What are the norms? It couldn't be answered, because it's an absolutely fungible, malleable sense of norms, normalities and abnormalities. Just one sentence to finish? Because I think the question is not that we are learning to interpret the algorithms, but that we learn to interpret what we are doing in our society that needs algorithms. Thank you. Okay. The gentleman in the back, two rows from the back, please. Last question. Hi, thanks for your talk. So the media equation is a general communication theory that claims that people tend to treat computers and other media as if they were either real people or real places. The effects of this phenomenon on the people experiencing these media are often profound, leading them to behave and respond to these experiences in unexpected ways, most of which they are completely unaware of. Is there a fundamental question here, in that we're using algorithms and we're using humans as metaphors for how algorithms should work? Because my understanding is that algorithms are sometimes modeled on the brain, the nodes in the brain, the neural network, and we've not yet understood the dimensions of the human brain, but in many conversations I hear us talk about algorithms as if they are people. And so my question to you is: how do you reconcile that in your field? When you listen to interviews and you speak with the computer scientists, is it a tool, or is it human-like? Yeah, okay. All tools are human-like. I mean, seriously, I think sometimes I try to challenge myself: well, actually, what other sorts of modes of calculation are deployed in ways of governing people and populations? Do we have lots of conceptual and important empirical resources to draw on? So, people like Ian Hacking and his very careful accounts of the histories of statistics, and of how that twinning of the science of the statistical and the power of the state emerged together. How was the average man understood in that context? And, of course, the origins of biometry and biometrics and eugenics are absolutely in that intersection of a particular tool or mode of calculation that was dominant in the nineteenth century. And I guess what I've tried to do with my new book is to say, well, actually, what if we were to say that here we have another mode of arranging propositions about the world, and some of it is statistical, and some of it statisticians insist is not, but we still need that resource. So we acknowledge work like Lorraine Daston's brilliant histories of probability, and Theodore Porter and his work. But do we yet have the kind of resource for thinking about these contemporary modes of calculation, deployed in various ways to govern populations? Do we have ways of thinking about how they are changing our notions of norms, anomalies, the good and the bad, the transgressive and the compliant? So some of the time I'm urging that we should suppress the novelty and think about longer histories.
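What a "malleable threshold" means operationally can be shown with a few invented numbers: the model and the scores it produces stay exactly the same, but moving the cut-off changes who counts as "abnormal", which is why asking the system "what is good?" has no stable answer. The scores below are random placeholders, not output from any real system.

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.random(1000)   # stand-in for anomaly scores from some model

# The model is unchanged; only the operating point moves.
for threshold in (0.99, 0.95, 0.80):
    flagged = int((scores > threshold).sum())
    print(f"threshold {threshold:.2f}: {flagged} people flagged as 'abnormal'")
```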
You know, as you did just then, when you talked about the brain as a neural network, and so histories of cybernetics. So, for instance, Orit Halpern's brilliant Beautiful Data is really a history of cybernetic forms of reason that models exactly that relation between the human brain and the technology in those terms. And she brings it right up to date by thinking about smart cities and how the residues of those ways of thinking about the relation between the human and the tool are still present in smart cities. So yes, I think I'm agreeing with you, and I'm trying to address that temptation, which I know I also have sometimes, to claim novelty, and instead to say: let's think about this as part of a durable history of seeking to govern, not only through enumeration and data and measurement, actually, but through representations of self and representations of others. So even those models of the brain in neuroscience are in a sense model-making, in a way that we might think is not that different from the model-building that's happening in an immigration and border control centre where they're trying to create a new model. Yeah, but thank you. That's a great question to end on. And here is the very last question to wrap this up, Louise. I usually pose a question about legislation; I've already done that in a Q&A. But let me ask really quickly about one of the solutions you propose in your work, and it's, oh yes, you were talking about critique, about risky speech, about parrhesia as in Foucault, about running against the grain, talking against the grain of the single output, that is to say. Now, risky speech is something where I wonder whether we are living in good times to actually go for risky speech. These are toxic times in many ways, but they're also, I think, very hopeful, in that more people are taking part in the general discourse, right? The discourse is becoming slowly but steadily more diverse. It's not just people like us sitting up here, by skin colour, by birth, by education and so forth. So this is widening, but it also makes it much more difficult to actually say risky things, and you have to watch it, because there are more people listening in, there are more people taking part in the discourse. So it's becoming more of a communal thing. How does risk actually interact with that sort of sociological development, which I think it is good that it's finally starting to happen, but which sort of runs counter to what you propose as a solution of critique? Yeah, well, there's a question.
I mean, I've spent a lot of time thinking about risk, but usually in terms of risk itself as a technology. So what I was trying to do in thinking about risky speech, or about risk in the sense of how one speaks against the grain of the prevailing knowledge, which is how Foucault describes it, really started to happen for me when I was working with some of those oncologists who were using the software to show what the optimal treatment pathway would be. When they started to use this language of the flow, it got me thinking about what it would mean, then, to place oneself at risk. And Foucault in that work is writing about notions of expertise and expert authority too, I think. So I do think it means that there will need to be voices, and there will need to be voices that run against the grain. And that does mean that in that gap which I'm saying exists between the output of a neural network algorithm and something we would call a decision, the decision is always taken carrying the burden of risk. The burden of risk that's carried by the decision is that one can never fully know the consequences of what it might mean, right? And I think that is about our ethical responsibilities in our relations to others, because what do we place at risk? Well, what Judith Butler says we place at risk is ourselves, right? We risk recognizability ourselves. And I find that quite an interesting notion, risking recognizability, because so much of the public debate around the public use of algorithms says that, all of the time, these are driving precisely to recognize you: to recognize you as a bundle of propensities. Who are you? What are your attributes and qualities? What might you do? How do we infer those? So to risk recognizability seems to me something different from that kind of withdrawal or concealment that we were talking about. And that's what I'm getting at: the importance and significance of people deciding against the grain, actually reopening the space of the aperture, and saying, my decision is not the same thing as the output of that neural network. Yeah. Thank you so much for that, Louise Amoore. Thank you for travelling down here to Berlin. See you in the fall, have a nice summer, and we'll continue this series then. Thank you.