I want to suggest that increasingly our lives and our life chances are becoming ever more entangled with the adjudications of algorithms. Now, of course, you might say, well, these are very different aspects of our lives, policing, borders and immigration, the health system, and you might wish to say that machine learning is acting ethically in some aspects of our lives and not in others. That might be the direction we want to go in as a society. So when an oncologist who specialises in a rare form of head and neck cancer told me about the deep neural networks he felt he was collaborating with, he said they're making possible vast improvements in detection and treatment. And we might say, well, look, here is the good. Here is the ethical use of machine learning for a responsible society. But what I want to get us to begin to think about this evening is how one might begin to draw that line between the good and the bad, or what we think of as the unethical and the ethical, in relation to machine learning algorithms.

So a team of computer scientists that I followed throughout 2017 had been working precisely on new methods for tumour recognition and for the targeting of particular treatments for specific tumours. We might again say, here is the good use of machine learning. But they had also developed their expertise as a team in the detection of what they called problem gambling. You can see a short extract from my interview with them there. The online gambling company Betfair had asked them to use machine learning to detect the patterns of online gambling and to detect what the anomalies might be. They had also worked for two years on object recognition from the video stream data of drone footage for a major military company. In each case, as they described it to me, they said, the fundamental thing is we know what good looks like. We know what good looks like. And they said that because they'd clustered the data in a way that would show them what normal or good looked like, they could then detect anomalies. So for them, in a sense, the problem space was the same across all of those different domains of society. They were telling me: we know what addicted play online, or diseased tissue in the MRI scans of the human body, or a civilian vehicle through the video lens of the drone, looks like. We know what good looks like.

So I want to propose to you this evening something about this designation of the good and the bad, to which so many societies are feeling they have to respond: how do we embrace forms of machine learning for the good of society?
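To make that method concrete, here is a minimal sketch, in Python, of the clustering-then-anomaly pattern the team describe: fit a model of what "normal" looks like, then flag whatever falls far from it. The features, data and threshold are invented for illustration; this is not their actual system.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for per-player features, e.g. (session length, stake size),
# drawn from the population the model treats as "normal" play.
normal_play = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(500, 2))

# Cluster the normal data: the clusters stand in for "what good looks like".
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal_play)

# A session's anomaly score is its distance to the nearest cluster centre;
# the cut-off is learned from how far normal play itself tends to stray.
threshold = np.quantile(model.transform(normal_play).min(axis=1), 0.99)

new_sessions = np.array([[1.1, 0.9],    # close to normal play
                         [4.0, 5.0]])   # unlike anything seen before
for session, dist in zip(new_sessions,
                         model.transform(new_sessions).min(axis=1)):
    print(session, "anomalous" if dist > threshold else "normal")
```

The point the team were making is visible even in this toy: the same distance-from-normal logic applies unchanged whether the rows describe gambling sessions, tissue scans or vehicles in drone footage.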
This designation of the good, the bad, the ethical, the unethical, or even human versus machine decisions, is not at all a straightforward matter when it comes to our lives with algorithms. So notwithstanding the widespread public claims that the black box of the algorithm should be opened up, that we should make sense of it, that algorithms must be made accountable for their actions, I want to say instead that the prime question should not be how algorithms should be arranged for the good of society, because their arrangements are changing the paradigm of what good means in society. We know what good looks like. So rather than beginning with that question of how to make them good or normal, I want instead to pose a different question: how are algorithmic arrangements generating ideas of the good, the normal, the transgressive and the risky?

So it seems to me that in our contemporary moment, when targeting and deciding is taking place increasingly in collaboration with machine learning algorithms, the reflex public response is often to say, well, these are autonomous technologies and they are unaccountable. They are machines, if you like, making decisions beyond the human capacity for scrutiny. In the terms of your themes, we cannot make sense of them; they are somehow concealed from us. But to draw to a close, I want to suggest that actually the harm done is not primarily the ceding of human control to machine decision. The principal harm, I think, is a specific threat to the notion that we live together and we decide, uncertainly, in the face of difficult and intractable dilemmas. And that, I think, is politics, that is political life.

So the claim to secure against uncertain futures with algorithms forecloses other potential futures, even where the neural net itself, as I've described, embodies a teeming multiplicity of pathways that were not taken. So when the algorithm condenses a single actionable output, I would like us always to remember that this output signal lies behind actions like risk scoring at borders, or decisions about the potential future of a child in relation to social services. Increasingly in the UK this is being used to make differentiations between different levels of at-risk children in society. Decisions on detention, on immigration, or on the dangers of a gathered protest on a city street.

So for me there can be no algorithmic accountability in the Enlightenment traditions of transparency or clear-sighted account. That means no way of having a code of ethics that we might say all algorithm developers and designers should sign up to. No opening of the black box. Instead, I think we could demand that algorithms give a necessarily partial account of themselves. It seems to me that this is not a new problem in a sense. The impossibility of giving an account is the precondition of politics, of the difficulty of decision. Philosophers and political theorists have been talking about that for a very long time, the impossibility of that clear-sighted account. So in some ways, algorithms don't pose a new problem, but they do expose very vividly a persistent problem of grounding ethics and responsibility in ideas of objective sight and knowledge. As the philosopher Judith Butler reminds us, we do not reach the limits of ethics at the edge of intelligibility, where we can no longer make sense. On the contrary, she says, it is at the limits of what can be rendered intelligible or known that ethics becomes most crucial.
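It may help to see how small that condensation step actually is. Below is a minimal sketch, assuming a classifier that scores three risk bands; the labels and scores are invented, but the shape of the operation, a whole distribution of pathways reduced to one actionable signal, is the general one.

```python
import numpy as np

labels = ["low risk", "medium risk", "high risk"]
logits = np.array([1.9, 1.7, 0.4])   # raw model scores (invented)

# Softmax spreads belief across every candidate outcome...
probs = np.exp(logits) / np.exp(logits).sum()
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
# low risk: 0.49
# medium risk: 0.40
# high risk: 0.11

# ...but argmax is what leaves the system as the single actionable output.
print("output:", labels[int(np.argmax(probs))])   # output: low risk
# The near-tie between low and medium risk, a pathway not taken,
# is invisible in the output signal itself.
```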
So in your terms, perhaps ethics begins with sense-making, with imagining different ways of sense-making. So at the heart of my call for a different mode of ethics, which in my new book I call a cloud ethics, there are three proposals of a kind, and I'm going to conclude by just mapping what they are.

So first, I'm proposing we must rethink what ethics means in relation to algorithms, so that it is no longer a question of imagining that we stand outside and adjudicate on their behaviour. The philosopher Michel Foucault, in common with many other political theorists, distinguishes different forms of ethics, and so, on the one hand, he talked about the code that determines which acts are permitted and which forbidden. For me that could almost describe this sense that there's a Silicon Valley problem, and that if we could just delineate what could be forbidden and what could be permitted, we would make some kind of progress. And I think his sense that this is a limited form of ethics perhaps comes from his own work on the governing of norms in relation to sexuality. So he distinguishes that from a different kind of ethics, which he describes as the inescapably political formation of the relation of oneself to oneself and to others. So the inescapably political formation. And this distinction I think could be crucial, because which acts are permitted and which are forbidden is, I think, a sheltered form of ethics in which we will be continually trying to adjudicate on when algorithms step over the line. Perhaps we might talk about that some more in questions. But actually, what are they doing in terms of what I've described tonight? They're functioning precisely through a reorientation of selves to selves and others, through this inescapably political formation. And that's what I want to try to work with: this sense that with algorithms we are still struggling with a political formation, and with what it might mean in the context of machine learning.

So then, second, I would like to think that we could reconsider the output and what it means in our world, that we might be able to reflect on it differently, so that the output of the algorithm is never understood as determining a decision. And so, instead of thinking of outputs, I would like us to think about something like an aperture. For those of us working with notions of aperture from the arts or photography, the aperture is always both a closure, a reduction, but also an opening. So we might be able to open out and think about the other alternative ways of reasoning that might have been present before that closing down took place. We should make some trouble, I think, at the aperture. As the feminist Donna Haraway suggests, we should stay with the trouble and, as she describes it, follow the threads in the dark. So even in the face of reduced outputs, I think we could consider the traces of rejected alternatives.

And for me this has been a kind of thought experiment; I've tried to exercise it in relation to facial recognition biometrics, for example. So you might know that in April a Brown University student in the US with Sri Lankan parents, Amara Majeed, was misidentified by the Sri Lankan authorities as having a link to the bombings in Sri Lanka. And the apology that was issued by the Sri Lankan authorities said that the facial recognition system had misidentified her, by which point, of course, she had death threats in her inbox.
Now, I think if we see the output of the algorithm only as a mistake or misrecognition, we might lose sight of what could have been going on in the aperture: when she was a teenager, she wrote an open letter to Donald Trump expressing her concern about the targeting of Muslims. She also has a project called the Hijab Project. There are multiple other forms and lines of narrative and story that make that designation of her as a risky person not a mistake or an error at all. Does that make sense? So if we think with the aperture, we don't say it's just made an error, a mistake, it can be fixed, we can modify it. Instead we ask: what were those other potential links and correlations that it was working with? So that would mean, if we were to go with this thought experiment, that every time we're confronted with an algorithm that says here is an optimised output, this is optimisation, our first thought would be: yes, and what were the rejected alternatives? What other forms of connection and being together, not already explained, might be present? How could the output have been otherwise? What are the bifurcated pathways that continue to run beneath the surface of an optimised solution?

And finally, the weights. I wish to make the weights in deep learning algorithms a lot heavier and more burdensome. A few generous computer science friends of mine have urged me not to pursue this, and have said that this is not something that we should do. They have described it to me and said, look, the adjustment of weights is an impenetrable process that retains its opacity even to those who are undertaking it. So one of them said to me, you cannot make the weights political, Louise, because they're not really a thing. We don't know how they work; we are just messing around with them. But, though I think they didn't realise it at the time, this was music to my ears, that it was something opaque and that they were messing around with it, because it's exactly this kind of opaque, messy and embodied experimental relation to the algorithm and its data that interests me. As Butler says, a certain opacity persists. So when the judge, the oncologist, the clinician or the border guard decides with algorithms, I think they also necessarily don't know how they work. They are just messing around with them. So some of the most fundamental political and crucial decisions of our times are being made, I think, through this modified and fungible notion of what can come to matter.

And so there it is, I think: our lives with algorithms, the inescapably political formation of relations to ourselves and to others. Thank you.
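For readers who have not met the weights, a minimal sketch of a single weight update on a toy model, with arbitrary invented numbers, gives a sense of the adjustment the computer scientists describe: every weight shifts a little at once, and no individual shift carries a readable meaning of its own.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)            # three weights of a toy one-neuron model
x = np.array([0.5, -1.2, 0.8])    # one input example (invented)
y = 1.0                           # its target output (invented)

pred = np.tanh(w @ x)             # forward pass
# Gradient of the squared error through tanh, then one small step "downhill".
grad = (pred - y) * (1 - pred**2) * x
w -= 0.1 * grad                   # the adjustment of the weights

print("adjusted weights:", np.round(w, 3))
# Repeated millions of times over millions of weights, these small mutual
# adjustments are the "impenetrable process" described above: opaque even
# to those undertaking it.
```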