Now, to properly welcome you to what is the very last research seminar of Spring 2023, and it really feels like that, because it's still light out at 5 p.m., so it's quite nice. So thank you all for coming, and to all of you who are joining online as well, and I know there's a big contingent joining us online. Today's event is part of a series called In Conversation: New Directions in Art History. The aim of the series really was to bring new research and ideas into conversation and circulation, but also to think through the approaches and methods that we build along the way, how we write interdisciplinary and thought-provoking art histories. You've already heard from a wide range of speakers. I know I see some familiar faces and some new faces as well. And today's event explores images, ways of seeing, and artificial intelligence. It is titled Neocolonial Visions: Artificial Intelligence and Epistemic Violence. So artificial intelligence, or AI, often presented as an objective view from nowhere, constitutes a regime of power that further entrenches historical forms of bias and evolving models of subjugation. Today's speakers, Anthony Downey and Maya Indira Ganesh, suggest that a key component in this process involves the extraction of data from digital images in order to train AI. How do we understand the transformation of images from their symbolic and representational context to their contemporary function as sources of digital data? The format of the evening will be similar to the others that you've already experienced. We'll have about 15 to 20 minutes from each speaker. And then for about 10 to 15 minutes, the speakers will discuss each other's work, think through their talks as well as think about methodology and how we do interdisciplinary art histories. And then we have time for Q&A from the audience, both in-person and online, so feel free to put your hand up and also put questions in the Q&A section of the chat. That will last for about 25 minutes, and it will be followed, for in-person audience members, by a reception next door at the centre. So just to introduce the speakers today: we have Anthony Downey, who is Professor of Visual Culture in the Middle East and North Africa at Birmingham City University. Downey is the cultural and commissioning lead on a four-year multidisciplinary AHRC Network Plus Award, where his research focuses on cultural practices, digital methods and educational provision for children with disabilities in Lebanon, the occupied Palestinian territories, and Jordan; the project runs between 2020 and 2024. Recent and upcoming publications include Algorithmic Anxieties and Post-Digital Futures, forthcoming with MIT Press, and I think we'll hear some things from this book today; Topologies of Air, with Shona Illingworth; as well as various other publications, including Heba Y. Amin: The General's Stork, from Sternberg Press. He has curated shows of Heba Y. Amin's work at the Mosaic Rooms in London and the Zilberman Gallery in Berlin. With Anthony, we'll have Dr. Maya Indira Ganesh, who is a cultural scientist, researcher and writer working on the social and cultural politics of AI, autonomous and machine learning systems. She is a senior researcher at the Leverhulme Centre for the Future of Intelligence, and an assistant professor co-teaching a master's program on AI ethics and society at the University of Cambridge in the UK.
Ganesh earned her PhD in cultural sciences from Leuphana University Lüneburg, and her work examined the reshaping of ethics through the driverless car and apparatuses of automation and automobility, big data, cultural imaginaries of robots, and practices of statistical inference. Before returning to academia, Maya spent a decade as a feminist activist working at the intersection of gender justice, digital security and digital freedoms of expression. Her work has consistently brought questions of power, justice and inequality to those of the body, the digital and knowledge making. Without further ado, over to Anthony. Thank you for the invite. It's great to be here, and thanks to Ella for organizing and putting up with us. I should say it's a great pleasure to be here with Maya. Maya and I have worked together on a number of projects, and we are working on a number of upcoming projects, but it's always a great opportunity to have these conversations in public. We have a lot of these conversations in private, but having them in public is a much better way of actually articulating them. So on that note, I'm going to talk about a very short section of a chapter of a book that I'm writing, and as my father's fond of saying, nothing comes of nothing; there's always something behind that, and there are a number of projects that I've been working on over the years which I think more or less plot the trajectory of the talk I'm about to give. One of those books is with Heba, mentioned by Shreya. Heba is an old pal of Maya and myself. We've all worked together very closely. This book, for want of a better way of putting it, and perhaps being a tad reductive, is about data extraction and surveillance, specifically in the context of Egypt and the Middle East. So I want you to just hold this notion of data extraction, because I think this is one of the key elements. And I think, just to follow up on that, within this book I argued that the colonial moment, and again I'm being a tad reductive, was about the extraction of wealth. The neocolonial moment, whilst furthering that ambition, is about the extraction of data. Now again, that's something which I will talk about, and the basis of the book I'm working on is very firmly imbricated within that context. Another book that Shreya mentioned is Topologies of Air, with Shona Illingworth, which I played a small part in, inasmuch as I edited and conceptualized the book itself, but there are many, many, many other people involved. This book is very much about aerial threat: the aerial threat embodied in lethal autonomous weapons systems and unmanned aerial vehicles. My general input into this was to look at how that's being algorithmically rationalized, how AI is playing a role in lethal autonomous weapons and unmanned aerial vehicles. So again, this is very much about something which I want to extrapolate here this evening, but it's also core to the book itself, and some other material around that. The third element is something I'm working on presently, and that's a book with Trevor Paglen, whom I'm sure you guys know, or at least know vaguely of his work. One of the key things with Trevor, and this book in particular, and in fact I just got the cover last week, so I will pop the cover up, is that algorithms as we understand them, machine learning, artificial neural networks, convolutional neural networks, generative adversarial networks, they all have a systemic brittleness, and that brittleness is often manifested in hallucination or hallucinatory behavior.
Algorithms, despite this notion that they're somehow abstracted or the view from nowhere or scientifically proven, are given to pathologies, and those pathologies can be exemplified quite often in the willing and summoning forth of realities that do not exist. I'm sure you remember a few weeks ago, on February 8th, when Google launched Bard, and Bard made this huge mistake. It hallucinated an answer into being. In that split second, $100 billion was knocked off Google's market value, which might bring pleasure to most, but in that moment as well it showed and exemplified the systemic problematic of AI. And that systemic problematic is that it is brittle. It is probabilistic and it is statistically oriented. It is not deterministic. And this is something Maya and I will come back to. So, I want to take those three ideas for a brief walk, and I want to talk through a case study. And I'm using the term case study advisedly here, because again, if we're going to talk about data extraction, if we're going to talk about the way in which the colonial moment is about extracting wealth, if we're going to talk about the neocolonial moment as the extraction of data, then I think we need to address how we address those issues specifically in the context of the Middle East. So the term case study is even problematic here when we consider the notion of epistemic violence. How do taxonomies, how do categorizations, fix, set, and extract information from the Middle East? But this particular case study is quite immediate, inasmuch as it happened quite recently. It's part of an ongoing investigation into the death of Zemari Ahmadi, his brother-in-law, his brother, and seven children, all members of the same family. This was the last known drone strike of the so-called war in Afghanistan. It more or less bookended the first known drone strike, which happened almost exactly 20 years earlier. This drone strike, on August 29, 2021, was carried out by the US military. Now, this particular drone strike has since been the subject of a New York Times investigation and many other investigations into what precisely happened. And I've been doing my own research into what precisely happened. What we do know is that any drone strike, such as that drone strike on Zemari Ahmadi and his family, would have been a signature drone strike, i.e. they would not have known the person in advance. That's in opposition to a decapitation drone strike, where the individual is known in advance. In this instance, data would have been gathered. A pattern-of-life analysis would have been made. This would have been made by an aerial platform, most likely an MQ-9 Reaper drone. There would have been biometric data analysis on the ground and human intelligence. And I want you to hold this notion of human intelligence, because I think this is quite important in the broader context. This was an over-the-horizon strike, an OTH, ordered and more or less managed from Al Udeid Air Base in Qatar. That air base would have been working in conjunction with Creech Air Force Base in Nevada. So effectively, all of this death and destruction happens remotely. So you have a scenario, a situation, a historical event, where seven children and three adults are killed. Following that moment, you have the US military stepping up, in the person of General Mark Milley. Some of you might recognize this chap. He was infamously the chap who walked out with Donald Trump when Trump decided to have that little march in Washington.
Milley came out and said it was a righteous strike, based upon the evidence that he had, that Zemari Ahmadi was an ISIS affiliate about to launch a terrorist strike in Kabul on August 29, 2021. Very quickly it transpired that, on the contrary, he was not an ISIS affiliate. In fact, he was a health worker slash aid worker, distributing food and other materials to various people across Kabul, hence him driving a white Toyota Corolla and zigzagging across Kabul, delivering aid. Now, the MQ-9 Reaper drone, which was surveilling this for a pattern-of-life analysis, immediately saw that particular pattern of life as being indicative of a potential terrorist, either scoping out a particular target, or indeed gathering further information for an imminent terrorist attack. Then it transpires that, after the event, how the US military accounted for this egregious mistake, which resulted, as I said, in the death of 10 people, was to put it down to human error. Now, again, I think this is important, the human intelligence side of it. And if you trace through the actual text, and I think textual analysis is quite helpful here, you see again and again and again that the error is given over to human error. Now, I think this is problematic because we can always blame human error. In this instance, however, I want to argue that this is not about human error. It's about systemic, systematic error that is inherent within the algorithmically rationalized data that drives lethal autonomous weapons and unmanned aerial vehicles. And again, I want to go back and just break this down a little bit further. A pattern-of-life analysis: it's gathered information, it's data, it's extracted. It then sets up, and this is a generic image, this is not an image related to the attack on August the 29th, 2021, but that data is then algorithmically rationalized. It's produced as numeric code or binary code that can be read as part of a data set that's fed into an algorithmic system to train it to recognize certain patterns of behavior and, perhaps most critically, predict what certain patterns of behavior will be in the future. Combined with biometric analysis, you can imagine you're setting up basically an argument for the prosecution, which is not being set up by humans on the ground or humans on the loop. This is all being algorithmically rationalized through systems over which even the engineers have very little capacity for understanding or control. How does that actually work, though? Now, this brings us perhaps to the core of some of the issues around digital images. What data is being extracted from digital images and how is that data being used? Because when you extract data from a digital image, it's not image data, it's not symbolic, it's not allegorical; it's numeric, it's binary. That particular data can then be used to train algorithms. But where do you start with that? If you look at an MQ-9 multispectral sensor, for example, this is a system that does not just extract data. And what we know is that, of course, it does extract data. And I'll just point out to you how, more or less, it does that. There are three specific systems within it. And again, this is a generic diagrammatic portrayal of what that system does. But this system is not just taking images and extracting images. It's also rationalizing what those images are. It's also, at a certain point, making decisions about what it is seeing. And those decisions are often being made independent of a human in the loop.
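To make concrete the earlier point that what is extracted from a digital image is numeric and binary rather than symbolic, here is a minimal, illustrative sketch in Python. The array values and labels are invented for illustration; this is emphatically not the military pipeline itself, just the general mechanics of turning an image into training data:

```python
import numpy as np

# A "digital image" is just a grid of numbers: here, a tiny invented 4x4
# grayscale patch, with pixel intensities from 0 (black) to 255 (white).
patch = np.array([
    [ 12,  40, 200, 255],
    [  9,  37, 190, 250],
    [ 11,  35, 180, 240],
    [ 10,  30, 170, 230],
], dtype=np.uint8)

# What a training pipeline "extracts" is not symbolism or allegory, just
# this numeric array, typically normalized and flattened into a feature vector.
features = patch.astype(np.float32).flatten() / 255.0
print(features)                    # sixteen numbers, nothing more

# Underneath, it is stored and transmitted as binary code.
print(patch.tobytes()[:8])         # the raw bytes behind the "image"

# A labelled training set is then just pairs of (feature vector, label),
# e.g. 1 = "object of interest", 0 = "background".
dataset = [(features, 1)]
```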
So what you have here is not just a system of extraction, you have an over-determined system that is prescribing or summoning forth what it thinks those images are. How much do we know about this? Fortunately, we now know a hell of a lot more about how this operates. And we know it because of Project Maven. I'm sure many in this room remember Project Maven. This hit the headlines in 2017 when the Pentagon came out and said, right, we're now going to use algorithms formally in order to isolate, highlight, and identify future targets. What they didn't say at the time in 2017 is that they would be working with a private company, namely Google. Now, fortunately, because Google is like a big sieve and everything leaks, we know a lot about what was actually happening to the data, the images that were being extracted, and how they were being utilized. Those images were delivered to Google, video images, still images, in a so-called cold environment, i.e., they were delivered on a database, they were not connected to the internet. Google used TensorFlow, and I just want to highlight, TensorFlow is Google's framework for building artificial neural networks, and we're going to come back to that. Google used TensorFlow to isolate 38 objects in that video footage. We don't know what the 38 objects are, but we know that Google engineers were working through Project Maven to support the US military to highlight, isolate, and potentially predict targets. The system in use was TensorFlow, used to train artificial neural networks. I just want to hold that for a moment. Very briefly, before we go any further: despite the fact that it all hit the headlines, and Google's relationship with the project ended at that point, Project Maven did not end. Project Maven was then taken up by the unholy alliance of Peter Thiel, co-founder and chairman of Palantir, I don't know if you're familiar with this company, a very interesting company, for many, many, many different reasons; Eric Schmidt, former CEO of Google, close ties to the Pentagon; and James Murdoch. It's almost like the three horsemen of the apocalypse. All you need to do is throw Kissinger in there and you've got the four horsemen of the apocalypse. And interestingly, Henry Kissinger and Eric Schmidt have published a book on AI and the future of humankind. And again, there's a confluence of interests here, which is not just about the military-industrial complex, it's also about the education slash entertainment complex. Again, something we could discuss further. But let's go back super briefly. Images extracted, data extracted from images, data used in training sets to train algorithms to identify objects of threat, but not only identify them, also potentially predict when those objects of threat would come into play. So you have here something very important. You have the primary function of algorithms, the primary function of AI, which is to predict the future, but you also potentially have something else coming into play, and that's the military imperative of the preemptive strike. The idea that preemption is better than containment, that it's better to eradicate a threat before it becomes a threat. Now again, I wanna come back to this because there is a collusion here between the systemic process of algorithmic prediction and the military imperative towards preemption. And that is again a martial logic that has long defined kinetic and non-kinetic warfare. But before we go there, what happens to these images specifically?
Now this is a very generic graph, but it does give some idea of what's actually happening in an artificial neural network or a convolutional neural network, such as those built with TensorFlow. You have an image, which again can be represented as a vector or a raster or numeric code, which is extruded through a convolutional neural network to produce a classification or a prediction of what that image is. Somewhere in the middle of this convolutional neural network, something happens. And largely what happens happens in a black box system. Maya's gonna come back and talk about this, because those black boxes are proprietary, someone owns them, hugely problematic, because even the term black box would dissuade you from investigating further precisely what's happening to that data within a convolutional neural network or an artificial neural network. I would argue, and I think Maya and I are of the same mind, that we need to deconstruct precisely what's happening in that black box. Because although this is a military-industrial proprietary issue, we need to know what's happening. What I do know, however, is that these systems originate from a number of other imperatives, one of which is Frank Rosenblatt's perceptron, which I'm sure the gentleman nodding in the foreground here, I'm sure you're familiar with the system. Ah, fabulous. I would like to hear more about that. This guy is more or less the father of machine vision, computer vision, and he is the background to what I'm talking about, inasmuch as the perceptron is the first artificial neural network. It's the first system whereby machines, machine vision, can see and predict what it is seeing. The initial project, unveiled in 1958, involved an IBM 704 computer fed punch cards, each marked on one side or the other. After 50 goes, the machine was able to correctly predict, around 90 percent of the time, which side a card was marked on. This is the origins of machine vision as we know it. Of course, you could go back to 1943 and the publication of Warren McCulloch and Walter Pitts's paper, A Logical Calculus of the Ideas Immanent in Nervous Activity, which is the beginning of the artificial neuron. And this is precisely the system that Frank Rosenblatt takes on board in 1958 to produce the perceptron, which is the grandfather of all of these systems we're talking about now in relation to artificial neural networks and computer vision. The point I will make briefly is that this system is not deterministic, and I want to emphasize this. It cannot predict the future. Despite all of the conversation around AI and prediction, AI cannot predict the future. No one, as far as I know, can predict the future. No machine can predict the future. And this fallacy of induction, and we can come back and talk about this, is absolutely inherent to AI: that somehow it can predict the future. But if you go back and look at the very origins of the perceptron, machine vision, artificial neural networks, you will find two things. Rosenblatt, quite obviously, was dealing with statistical separability and probabilistic models. What does that mean? It means that regardless of how much data you have extracted, no matter how many convolutions you've put it through, no matter how many generative adversarial networks you use, you will always have a probabilistic prediction of what will happen in the future.
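As a rough illustration of Rosenblatt's idea, here is a minimal perceptron sketch in Python. The "punch cards" are a toy stand-in for the 1958 demo (a mark on the left half or the right half of a flat vector), and the update is the classic error-correction learning rule; the exact accuracy will vary by run:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 1958 demo: each "punch card" is a flat vector of four
# cells; a mark appears on the left half (label 0) or the right half (label 1).
def make_card():
    side = rng.integers(2)                       # 0 = marked left, 1 = marked right
    card = rng.random(4) * 0.2                   # background noise
    card[2 * side: 2 * side + 2] += 1.0          # the mark
    return card, side

# Rosenblatt's perceptron: a weighted sum plus threshold, with weights
# nudged only when the machine gets an example wrong.
w, b = np.zeros(4), 0.0
for trial in range(50):                          # "after 50 goes..."
    x, y = make_card()
    pred = int(w @ x + b > 0)
    if pred != y:                                # error-correction learning rule
        w += (y - pred) * x
        b += (y - pred)

# Evaluate: the learned rule is statistical, not deterministic; it
# generalizes from past examples, it does not "know" the future.
tests = [make_card() for _ in range(1000)]
acc = np.mean([int(w @ x + b > 0) == y for x, y in tests])
print(f"accuracy after 50 training trials: {acc:.0%}")
```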
So these lethal autonomous weapons systems, your unmanned aerial vehicles, their algorithmic rationalization of data to predict future threat, can only ever be probabilistic. It can only ever be something which they think might happen, but it is not deterministic. It cannot determine threat. Why does that matter? Because these systems, probabilistic statistical systems, are given to hallucinating realities into being based upon their systemic and systematic systems of image and data analysis. Now I picked one particular example here, which I think goes right to the core of it. This is a convolutional neural network that consistently, 99.9% of the time, identified a 3D-printed turtle as a rifle. Now again, when you put that kind of system into an AI system that powers a lethal autonomous weapon, a system that's seeing something relatively innocuous as a weapon, you have the potential for a fatality. And again, there are numerous examples of this, and I'll just very briefly point out that there are also high-confidence fooling images, whereby artificial neural networks, convolutional neural networks, can be fooled into thinking, or can think, they are seeing a particular object or item, whereas in reality they're not seeing anything other than a pattern of noise. But this again is the hallucinatory imperative of AI: it hallucinates, it has pathologies of hallucination. This is very interesting too. One of the reasons given for the fact that the United States and other governments have not fully sanctioned fully autonomous weapons is that those weapons have the capacity to turn on themselves and indeed you, the operator. And why? Because quite often you can hide within relatively innocuous images other images, which it thinks it's seeing. So Paul Scharre argues in this wonderful book, Army of None: Autonomous Weapons and the Future of War, that because of this inherent systemic brittleness, lethal autonomous weapons, fully autonomous weapons, have never been used yet. What I do know is that they have been used, and again we can come back and have a quick look at that. They've been used in Libya, which I think is interesting again, a sort of testing ground for these technologies. So I'm gonna wrap it up with an infamous statement by George W. Bush, made about a year before the invasion of Iraq, which is now 20 years ago, March 20th, 2003: if we wait for threats to fully materialize, we will have waited too long. It's an extraordinary thing to say. A preemptive strike, or the martial logic of a preemptive strike, does not need a verified threat. What it needs is the indication of a potential threat. So think about that martial logic, that martial imperative towards preemption. Is that, and I would argue it is, obviously, in collusion with that hallucinatory predictive imperative that we find absolutely systemic and embedded and implicated within the logic of AI as a system? Put that in conjunction with the following statement from General Michael Hayden, former director of the NSA and the CIA, in 2014: we kill people based on metadata. So, quite literally, you do not have to represent a threat as such, but if a lethal autonomous weapon system powered by AI predicts you are a threat, and the metadata matches and the human intelligence matches and the pattern-of-life analysis matches and the data from a biometric analysis matches, you are then a de facto threat, just as Zemari Ahmadi was, rather unfortunately, on August the 29th, 2021.
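Picking up the earlier point about high-confidence fooling images: the effect is easy to reproduce even on a toy model. The sketch below uses invented data and a plain logistic regression rather than a deep network, but the principle is the same: pure noise can be optimized until the classifier is near-certain it is seeing an object that is not there:

```python
import numpy as np

rng = np.random.default_rng(1)

# Train a toy linear classifier to separate two synthetic "classes".
X = np.vstack([rng.normal(-1, 1, (200, 10)), rng.normal(1, 1, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

w, b = np.zeros(10), 0.0
for _ in range(500):                              # plain logistic regression
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Now start from pure random noise: no "object" is present at all ...
noise = rng.normal(0, 1, 10)

# ... and nudge it to maximize the model's confidence in class 1
# (gradient ascent on log P(class 1); for this model the gradient is (1-p)*w).
for _ in range(200):
    p = 1 / (1 + np.exp(-(noise @ w + b)))
    noise += 0.5 * (1 - p) * w

p = 1 / (1 + np.exp(-(noise @ w + b)))
print(f"model's confidence that the noise is class 1: {p:.4f}")  # near 1.0
```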
I want to end with this great quote from Derrida: modern technology, contrary to appearances, although it is scientific, increases tenfold the power of ghosts. The future belongs to ghosts. Now again, thinking about this phantasmagorical space that's being set up by AI, this space of imagination, fantastic imagination, I don't think it's too much of a shift to think back to the phantasmagorical space of colonization itself, the phantasms that were produced through colonial discourse. Now of course, there's a danger here that we might abstract this from reality, because there is a clear reality to that, and there is a clear reality to the systemic brittleness of AI and the preemptive logic of a martial strike, and that logic and that reality can of course be fatal. And I just want to end by pointing out one further thing. Quite often, then, a lot of the evidentiary work that's been done around lethal autonomous weapons systems and unmanned aerial vehicles is very important. It needs to be evidenced. How many people have been killed? Who do we know that's been killed? But it might distract us from an even more profound point. These systems are not about controlling the past. They're not even about controlling the present. They're about occupying the future. They're about prediction. They're about the co-option and occupation of the future imaginary of entire regions such as the Middle East. And that occupation is to do with trauma. People who are subjected to forms of hyper-surveillance over extended periods of time develop evolving forms of trauma. And I think this is where we need to place a lot of the research that we're doing at the moment. What forms of trauma are directly located in the knowledge that at any given moment you can be eradicated, literally, metaphorically, allegorically, from the face of the earth? And that trauma is evolving as we speak. And again, just thinking about some of the people that I've worked with on various projects, thinking about how they think about these technologies, because they don't think of them as abstractions. They don't necessarily think of, you know, the AI abstraction of them. They think about the reality of them and what that reality means literally on the ground. So, thank you. Okay. Thank you. Hi, everyone. Thank you so much for this invitation, Shreya, and to folks at the Paul Mellon Centre for helping organize it. And to Anthony for the dialogue. Kind of tough to follow that, because I think my work is situated in a slightly different space. And I think this is the interdisciplinary part of the work that we do: how do we kind of talk across these different frames, geographies, time scales? But also, I think, the way I understand what we're trying to do here in this moment, but also generally with this work, is to start trying to have conversations through theory and through methodology. And my slides are going to be much more wordy, and this might be a little bit performative. So bear with me. We'll see how it goes. So what I wanna talk about is actually what it means to live and die in this kind of machinic culture that we live in. And within the context of life that is shaped by statistics, what kinds of politics and sociality do we orient ourselves to in this time? And when I say in this time, I refer to this as something that I like to call the algorithmic sublime. And when I say algorithmic sublime, I'm directly referring to Bruce Robbins, a scholar who writes about the sweatshop sublime.
And some of you might be familiar with this idea. So the sweatshop sublime is this understanding that you could be standing at this window looking out onto this beautiful scene, drinking your cup of fair trade coffee and knowing exactly where your fair trade coffee comes from. We have all the data that we need to be able to say that this was ethically sourced from this part of the world. I know that my H&M shirt, as nice as it is, comes from maybe a sweatshop that I'm not so crazy about, but I'm able to see these realities. And it's sublime because we have the capacity now to see and know these things. We have so much vision into the systems that we're part of, but we're helpless. We can't really do much, because we feel trapped within the systems. And it's like our patterns of life have become sort of consumed and subsumed within these networks that enable us to stand at this window and drink coffee and look out onto this beautiful scene. So that's the sweatshop sublime. And I think that most people will recognize and understand that feeling of connection, but also the affect that comes with it, of the helplessness and the wonder. And what does that mean in the context of the algorithmic sublime? It's similar. It's this idea that, you know, we're beyond the privacy paradox. We know that we're standing here, we're emitting data. There are all of these great metaphors for data. We're leaking it. We're exuding it from everywhere. Every single thing that we do is being mapped and tracked. We recognize that we're being managed, coerced and nudged and profiled and targeted. Even though we may not be able to see all of it, we know this. And yet we have to continue within this, because life has become such that it's impossible to be without one of these things. But we all try. You know, there are times when we think, I'm going to obfuscate this. I'm going to throw a wrench into this, and I'm going to step out of this data matrix. And I'm going to lean into the algorithm. I'm going to fool it. There are lots of tactics, and personally I've been part of work that is very much about resisting datafication. But I'm going to suggest today that I think we need to update some of those approaches and some of the ways in which we work with those technologies and systems. And I think I also diverge from Anthony in thinking about some of the frames. Well, not Anthony, you know, literally not you, but I think some of the imaginations that maybe are a hundred years old. It's interesting, even when you're saying the Middle East, you know, that's an imagination, and that's a creation that's created by coloniality itself. So I think that where we're moving to now, and this is the kind of moment that I want to sort of inhabit, is to think about these situations and contexts shaped by data science, statistics and related approaches. And this is kind of the question that I think is orienting some of the things I'm interested in pursuing now, but also in the future. So given the kinds of epistemic order that are created by life under conditions of data science and statistics, what kinds of critical practice and interventions constitute approaches to justice, ethics, accountability, and politics? I'm going to go out on a limb here.
I'm not the only one to do so, and say that the epistemic operations of probabilistic statistics, frequentist statistics, neural networks, matrix algebra, stochastic forms of maths shaping our world, these are not completely contained and understood by the systems of justice and law and accountability that we have today, because, as much as they are material, these are also non-human and immaterial systems that we are confronting and that we're dealing with. So, sorry, lots of words. I come to this thinking through some work that I did over a number of years trying to map out the material-discursive epistemic infrastructures that propped up the possibility, or the impossibility, of an ethical machine. So I was studying this idea of ethics in the driverless car. Some of you may remember, about half a decade ago it was very much in the news. You might have heard about the trolley problem, you might have heard about very sensationalist, racy headlines about which way the driverless car should go if its brakes fail and it's in the speculative crash: which kind of human should it kill? The idea that computers could be programmed to make moral decisions in much the way that humans do, and to program that into a future imaginary of an autonomous vehicle that doesn't exist, I think this is actually a very old-school way of thinking about philosophy, thinking about reasoning, very situated in the human itself. So these thought experiments, like the trolley problem, are very much about human intuition and human ways of reasoning. But the thing that I think we're confronting is that in the context of making a driverless car, we're not actually dealing with a car anymore, we're not even dealing with cities; we're dealing with data infrastructures and we're dealing with large-scale statistical operations. So the interesting shift that I found has happened in the ethics of autonomous driving, or the ethics of these systems broadly, is that we've moved to this mode of thinking about ethics as decision making that will occur within locally produced, up-to-date, iterative algorithmic models of the world. These are models that are highly responsive to feedback and the environment. So you can't really have rigid rules or guidelines; you can only have risk analyses or game-theoretical modelings of what should be done, what is the best thing to do that will accrue to the maximum social good. In other words, these are statistical operations occurring inside AI/ML apparatuses being understood now as decision making, moral decision making. And I can talk about this more at length later. And we see applications of this not just in driverless cars; it's happening in lots of other aspects of life. You might have heard about things like effective altruism, which has been in the academic news recently. This kind of thinking about the maximum social good is very much something that's driven by modeling and statistical thinking. And I didn't know that Anthony was gonna talk about hallucination, but I suppose everybody's talking about hallucination now because of ChatGPT. So this is one example of this kind of mode of life that we inhabit now. So I just kind of asked ChatGPT to tell me about some works by Maya Indira Ganesh, and many academics are doing this now and are being very amused by it, because I did not write any of these books, but I wish that I had. Also, I discovered this great poet called Amaranth Borsuk. What a great name.
They write just some amazing poetry, and I'm definitely going to get in touch with them about it. Now, what's actually really interesting about this list, if you can read it: it says Cyberfeminism in the 21st Century; The Amaranth Borsuk Reader; The Augmented Reality of Pokémon Go; Cultures of Internet: Virtual Spaces, Real Histories; Ethnography and the Internet; Data Colonialism; Indigenous Resistance. The thing is, I kind of work on all of those things, but I have never written any of those books. And this is because hallucination literally is a condition of an AI model confidently, and I love the use of this word confidently, confidently exceeding its data set and giving you responses that are incorrect. And this error tells us something about the way the system operates, because what ChatGPT is doing, like many of these things are doing, is predicting what word is most likely to occur next. So if you say twinkle twinkle to ChatGPT, it will say little star, because that's what exists in the data set and that's what's most likely to come after it. So ChatGPT is just looking for the next most probable word. And so it's not actually that far off in this, but this is completely made up. Sometimes I look at it and I think, oh God, is it trying to tell me the direction I should go in? Is it setting up ambitions for me? But it is literally predicting, isn't it? Yeah, I do want to work with Amaranth Borsuk. This is something I actually saw today, and many of you may have seen this on Reddit: Midjourney. This is another hallucination: what would selfies look like by warriors and soldiers from the past? And what would their smiles be like? How would they smile at the camera? The thing is, indigenous people in North America were not smiling in photographs; if you know what conditions were like at that time, they were not smiling at the camera. But the American smile is actually a really popular thing in the database, because Americans are really good at smiling at the camera. Go look at this thread. It's actually incredible, the kinds of images that are there. And of course this is not ChatGPT; this is from Midjourney. So, the reason I find this interesting is because it relates to this shift from this Western industrial idea, which is about a couple of hundred years old now, from the age of mechanisms and mechanistic thinking and mechanistic apparatuses, of clocks and categories and hierarchies, racialization, things that have come from colonialism, to ideas of circularity, organicity, cybernetics and feedback. Now, none of these ideas are new or necessarily separate. They've all been sort of coexisting, but I think that we're coming back to them now because of the condition of algorithmic life that we're living in, and I'm happy to talk about it more in detail.
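To show how literal the earlier point about "predicting the next most probable word" is, here is a minimal sketch of the idea in Python, using a tiny bigram model over an invented two-sentence corpus. Real systems are vastly larger and operate on sub-word tokens, but the mechanism of chaining likely continuations, with no check against the world, is the same:

```python
import random
from collections import Counter, defaultdict

# A language model reduced to its core: count which word follows which,
# then keep emitting a likely next word -- whether or not the result is true.
corpus = (
    "twinkle twinkle little star how i wonder what you are "
    "maya writes about ai ethics and maya writes about data justice"
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    options = follows[word]
    if not options:                      # dead end: nothing ever followed this word
        return None
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(next_word("twinkle"))              # 'twinkle' or 'little': what the data contains

# Generation chains these guesses together. Each step is locally plausible;
# nothing checks the output against the world. Fluent, confident, and possibly
# false: this is the mechanical core of what gets called "hallucination".
word, out = "maya", ["maya"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```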
So, for the last bit of my talk I wanna talk about this idea of recursivity and contingency, and here I'm drawing directly from the philosopher of technology Yuk Hui. I'm also influenced very much by the political geographer Louise Amoore. So, I want you to humor me a little bit, people in the room and people online, and I want you to do a little meditation with me, a little guided meditation. If this makes you uncomfortable, just look down at your phone, ideally not looking at its screen; just turn it over and look at the back of it, okay? But if you so wish, what you can do is actually imagine yourself, your devices, all of these things giving data out and being shaped by data that's out there, that's swirling out there, and from where your shirt comes from to what's in your cup of coffee to all of the tabs that are open on your phone and your computer, there are all kinds of traces going out. Now these traces are being picked up and they're being traded, and they're moving at speeds that we can never comprehend. They're resulting in money, they're resulting in more data and information, and now all of that is actually coming back to you as well, and it's shaping you, it's nudging you. And take a moment to actually think about that feeling of being part of this incredible system of communication, of data going out and coming back. But there are also those moments when you take a different path to work, you shut your phone off, you think about throwing a wrench into the system and doing something you wouldn't normally do, trying to disrupt and obfuscate the trace. The thing is, I don't think the system actually changes. I think that what's happening, and this is what Yuk Hui tells us, is that there isn't linear causality; there's actually a process of recursion. And recursivity, he says, is not mere mechanical repetition; it is characterized by the looping movement of returning to itself in order to determine itself. So the system we're in is actually coming back to us, but it's not the same system. And the reason I'm saying things like guided meditation is because I think this stuff is hard to talk about, it's hard to understand, and it exists in the realm of our imaginations and it exists in our metaphors, because it does exceed human understanding at this point. So what's happening is not a perfect loop. The data, in this context, comes back, but it's also being transformed into something else by the ways in which you disrupted it, by just the changes that happen. Now these conditions of change keep getting absorbed back into the system, and the system can't break; the loop has to continue. So these extraneous events, like something that happens to the driverless car, or a different kind of pattern, are actually a contingency that gets reabsorbed into the system. So it's never still the same system, but it also doesn't break the loop. So I'm gonna say here that recursivity and contingency describe two basic features of every theory of systems. As a system is in contact with the outside, the events occurring will be perceived by the system and possibly, but not necessarily, change its behavior. The system acts recursively on itself, whether it is a machine programmed to achieve a certain function or a living organism pursuing goals that emerge in line with its contact with the environment. But for a systems model to be more than a deterministic construct of linear causality, and this is where we can have an interesting conversation about determinism, there needs to be indeterminacy regarding its interaction with the environment. The concept of contingency serves this purpose of enforcing ever-new recursive loops and adaptive strategies. So it is not a perfect circular loop, but perhaps something more spiraling and coiling, and this is the scholar David Beer's description of it. So this recursion and contingency that Yuk Hui talks about is a break from this mechanistic way of thinking, of this linear causality.
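As a loose, purely illustrative sketch of recursivity and contingency in this sense, consider a toy system in Python that recursively updates its model of "you" from your data trace. A one-off disruption registers, is absorbed, and the loop continues, re-formed rather than broken; all numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy recursive system: at each step it returns to its own state to
# re-determine itself (an exponential moving model of your behavior), and
# contingency -- your attempt to disrupt the trace -- is absorbed as just
# another update. The loop never breaks; it re-forms.
state = 0.0                     # the system's current model of "you"
alpha = 0.1                     # how quickly contingency is absorbed

for step in range(200):
    behavior = rng.normal(1.0, 0.2)        # your ordinary data trace
    if step == 100:                        # you throw a wrench in: obfuscate,
        behavior = -5.0                    # take a different path, go dark
    # recursion: the new state is computed from the old state plus the event
    state = (1 - alpha) * state + alpha * behavior
    if step in (99, 100, 110, 150):
        print(f"step {step:3d}: model of you = {state:+.2f}")

# The disruption shows up, then is metabolized; the system that returns is
# not quite the same system, but the loop continues.
```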
And this linear causality, I think, is part of the epistemologies and infrastructures of the 19th and 20th century technologies that have built contemporary society. Even with the kind of gendered and racialized categories that I think we've been talking about in the context of bias, I think what's happening within these systems is that new kinds of subjectivities are being produced. And I want to come back and end with Louise Amoore, who talks a lot about the way that machine learning has developed over the last decade. And this is of course the work of Geoffrey Hinton that she talks about: that what's happening within algorithmic and machine learning models is an output of a representation of the world. So this recursion and contingency that's happening is also just outputting a representation based on what it has learned. It is not the world; it is a model of the world that exists at that moment in time. So what happens to the driverless car, coming back to the driverless car? You cannot have a fixed ethics of what a driverless car should do, or what any autonomous technology should do, because what is ethical will be decided in that moment, most likely by local conditions and a set of factors and data that exist within the models that are in operation at the time. So I want to maybe end on a somewhat somber note by saying that things like our understandings of accountability and justice and rights and ethics really need to be, I think, updated and rethought for this moment of life under statistics and in the context of the algorithmic sublime. And with that, I'm going to end and say thank you very much for your attention, and for putting up with my meditation as well. Thanks. I think I had a couple of questions. Do you want to go first or do you want to hop in? Sure. I would say that the ways that you and I think about accountability and things like justice and ethics might be somewhat different. And I think maybe our work is about trying to figure out what that means. So could you think a little bit about how you understand accountability in the current moment that we live in? Yeah, it's a difficult one. When you were talking, there's a wonderful quote. I'm sure many people here know Jonas Mekas, fantastic filmmaker, I know Andrew would know of his work. He said something, I think it was about six months before his death in 2019. He says, technology is always ahead of humanity and ethics. And it's an interesting quote from someone who was so invested in technology himself as a filmmaker. Because I think what he was pointing out there is that technology itself will always exceed our legal, ethical, moral, historical, social, economic ability to not just curtail but fully understand what's being unleashed. Now that's long been a metaphor, the Promethean moment, when fire was given over to man and suddenly they're all sitting around going, okay, this is great, but we can burn cities with it, or we can light fires and keep ourselves warm; it can go either way. So this notion of accountability for what is unleashed in this moment, which is an algorithmic moment, it's an age, a post-digital age, of algorithmic rationalization. How we react to and are made accountable for that is absolutely critical. One reason for that is many people talk about algorithmic bias, the problematic of algorithmic bias: how do you eradicate algorithmic bias, how do you make it more accountable?
How do you make it stop making predictions based upon racial determinism, for example? But what we have to understand is there's no such thing as algorithmic bias per se; that's social bias. We program these systems, they inscribe the biases of our societies into those systems and replay them back to us. So when we talk about accountability for algorithms, I think we need to talk about accountability for society, and that's not just accountability for what we've unleashed through algorithms, it's also accountability for what we've programmed into them; I mean, how have we programmed these biases into algorithms? Now, I talk about machine vision because I think machine vision, and the one thing I didn't go into, and perhaps I should, is that a lot of these data sets are already inherently biased. The images that are being extracted from these zones of conflict are inherently biased. They're going into data training sets that are training artificial neural networks to replicate that bias and predict threat based upon biased, over-deterministic, racialized categories. So again, we have to ask a simple question: who's responsible for that, who's accountable for that? And you could elaborate out from that, Maya, and I think this is where Maya's work and mine come together. How do you develop a research methodology that's fit for purpose, for thinking from within these apparatuses rather than just reflecting upon them? Now again, I think this is the critical point: if we are going to talk about accountability, we can't just reflect upon this process. We can't just say it's good or bad. We have to enable and support critical thinking that thinks from within the apparatus, that deconstructs the apparatus, that deconstructs precisely that black box rhetoric. Because there's no such thing as a black box, I would argue. There are just modalities of obfuscation that serve the military-industrial complex, that serve the education and entertainment complexes. But we need to deconstruct that, and part of that would move towards not so much accountability, but perhaps taking stock of where we are presently. So this is where I think, unfortunately, we've come into a loop that's like a literal loop, and not a spiral or a coil, because as much as we're saying these are problematic deterministic systems, we're still seeking accountability from those systems in a rather deterministic way. But I think that there are some maybe new attempts being made to do something slightly different. So I don't know if many of you have read this, but it's been very much in the news in Europe: a group from the Netherlands called Lighthouse that does investigative work. And I think investigative journalists have been quite interesting in this process of trying to look at applications of algorithmic technologies in social welfare, in finance and banking, and in different aspects of social, cultural and political-economic life. And this was the case of the Dutch government trying to identify welfare fraud algorithmically. Now, the former Dutch government actually had to resign, and lost confidence, over this, because they were largely targeting people from racialized immigrant communities. But what's interesting about Lighthouse's reports, and I think the point I'm trying to come to, is that we're in a moment where we have open-source intelligence, and we're trying to get access to what the model looks like.
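To illustrate Anthony's earlier point that "algorithmic bias" is social bias replayed, and the kind of thing such unpacking of a model might reveal, here is a toy sketch in Python: historical labels encode a bias against one group, and a model trained on them reproduces that bias even though the group variable itself is never shown to the model. All data and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented population: a sensitive category, an underlying "merit" variable,
# and a proxy (think postcode) that happens to correlate with the category.
n = 2000
group = rng.integers(2, size=n)                         # a racialized category
merit = rng.normal(0, 1, n)                             # what decisions should use
proxy = merit + 2.0 * group + rng.normal(0, 0.5, n)     # correlates with group

# Biased historical decisions: group 1 was flagged far more often.
label = (merit + 3.0 * group + rng.normal(0, 0.5, n) > 1.5).astype(float)

# Train on (merit, proxy) only: the group column is never shown to the model.
X = np.column_stack([merit, proxy])
w, b = np.zeros(2), 0.0
for _ in range(2000):                                   # logistic regression
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - label) / n
    b -= 0.1 * np.mean(p - label)

# The model learns to lean on the proxy, and the social bias is replayed.
p = 1 / (1 + np.exp(-(X @ w + b)))
print(f"predicted flag rate, group 0: {p[group == 0].mean():.2f}")
print(f"predicted flag rate, group 1: {p[group == 1].mean():.2f}")  # far higher
```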
We're still going back to those systems to say, if I could just unpack the system and see within it, then it's going to be fine. And we're still then reflecting on it, where it's very hard for us to have the language, beyond a point, to think in terms of statistics or data science. So this is what I mean when I say we are in a little bit of a difficult situation, because our social and legal and regulatory systems and infrastructures can't quite keep up with that, and because we refuse to do societal reform. So we're kind of like, should I do something with the algorithm, or in the system? But I can't avoid it either. And I don't want to think about the work that I need to do in the social, because we're like, this is too complicated, this is difficult, right? So I feel like there's a real tension there, and it's problematic. So you and I had a chat, and we've had consistent chats, around the ideal scenario. As an academic, you wait for somebody to come along and give you the two-million-pound grant and they say, okay, get on with it. What would one do? And I'm being rhetorical to a certain extent, because I know what I would do, and I think we know what we would do, but how would you plot that? Again, coming back to your question, because there is a real danger that we merely engage in this loop and we repeat the same critique, which merely feeds the system itself, going back to your point about the algorithmic sublime. So you've got the grant, we're sitting around. What's day one looking like? I mean, how are we projecting this? Yeah, I like the way we're thinking about manifesting future work. One of the things I'm really interested in also is the work that language does. As I was trying to say, a lot of this exists in the realm of the imagination. I mean, even things like neural networks, the pictures we see, these are just visualizations of mathematical functions. That's not actually how they are, but we need it. It's a kind of language which allows us to make sense of the amazing, beautiful things, horrific things, we have created. So I'm really interested in things like metaphors and language. And I think maybe we need to update some of this language. I don't think we're dealing in the realm of a black box. And we talked about this earlier, thinking maybe of the black veil. Somebody mentioned this, and I feel really awful about this, but I don't have the reference of who said black veil; please know it's not mine. But it's something that I read somewhere. And the minute I read it, I thought, that's it. If you imagine for a moment something like a bead curtain: it is very material and forms a barrier, it is a barrier, but it's also something that tells you about what's on the other side. And I think we have many of these modalities that allow us to see to the other side, but we can't quite see. And the bead curtain is always shifting in the light, in the wind, and giving us a sense of what's in there. So I think that maybe a lot of things have to start with updating the way we approach our systems and think about them. And that's why, for me, things like statistics are really hard. And I don't think that this is only the realm of data scientists and engineers, even though behavioral economists and psychologists would say it's a job for engineers. I think there's a role for people in culture.
And the other thing is, because it's about language and culture, I think the work of this interdisciplinary piece is really hard and is really important. And it's about trying to bring people who don't often work together to say, how do we create those new spaces to not just look deterministically back at the system, but also do that work of the social? And yeah, I would say that's why: artists and cultural practice, researchers. Yeah. And again, this is how Maya and I came together in the first instance, through a mutual friend, Heba Y. Amin. And one of the things we were thinking about was how you can use visual culture to rethink colonial histories, which is something I've worked on for 20 years, something Heba's worked on, something you've worked on. But how do artists speculatively, how do practitioners speculatively, rethink those histories? And if you can offer a speculative model for thinking through those histories, the histories of colonization, for example, could you potentially project that as an interdisciplinary model when you're working with human rights experts, you're working with experts on trauma, you're working with legal experts, and you're working with a broader community, a collaborative community or a community of practice, that can engage with what is, to my mind at least, one of the single biggest concerns of our generation? Yes, we have imminent climate collapse, but in the interim we have something else that's profoundly taken over our lives, and we seem to have sleepwalked into it, very happily sleepwalked into it, and these systems are now generating not just massive amounts of data, but they are consistently occupying what our future will be, what our potential futures will be. And again, I think that this is a concern because it's narrowing down potentiality. It's quite literally putting an end to, or at least attempting to put an end to, the potential for us to critique that system, because ultimately we are and generate that system. I don't think I agree that we've sleepwalked our way into these things, but that's another conversation. I'm aware also that we could keep going back and forth and we should open it up as well, but I want to shout out a few projects that maybe show what is possible, and groups like Syrian Archive, now known as Mnemonic, who started working in Berlin with Syrian communities and now also work with Ukrainian communities. And I think groups like Mnemonic and Syrian Archive, people like Adam Harvey, in the context of geopolitics and algorithmic technologies, have been able to bring technical practice into relationships with communities on the ground and shift both methodology and practice, but also the cultural imagination, the language and the regulatory frames. I think 10 or 15 years ago, it would have been very hard to go to the International Criminal Court and say, here are petabytes of videos from YouTube, and this is my case for saying that Bashar al-Assad is a war criminal. That just wouldn't have been possible. It's possible now, but you have to do that kind of work and you need technical people, but that's why you also need the artistic vision within it. So I feel like these kinds of collaborations, situated very much in communities and driven by politics, and I think not just ethics, because I think there's been such a co-optation of ethics, will kind of show us the way into new modes of doing this kind of work.
Very briefly on that, just a quick anecdote, and then open to the floor, Shreya. So when I was working with Shona Illingworth on Topologies of Air, one of the airspace tribunals that I took part in, which was the Toronto Airspace Tribunal, where people were giving evidence about the lived experience of what it is to live under constant threat from lethal autonomous weapons: we had a four-star general from the UK army who gave evidence, and, you know, it was an astonishing thing. And the reason why he gave evidence, and Shona pointed this out to me, and it hadn't occurred to me, is she went to him with an artistic project, and she says, I'm making this project, it's about visualizing threat. I'm an artist. Now I think that disarmed him, or made him lower his guard a little, because if she had said I'm a human rights activist, or I'm a legal expert, or I'm a trauma expert, the doors would have come down. But when you go to people with visual culture or artistic practice or speculative practice, somehow there's more willingness to engage with the process. Maybe they don't see it as a threat, but that's your opportunity to get that information, which otherwise you would not get, you simply wouldn't get. So foregrounding it through how visual culture rethinks certain strategies of visualization is often a good process too. Shreya, I'm conscious you want to hand it over to the floor or to the online audience. Yeah, no, thank you. Yeah, I have questions, but we have so many hands in the audience, I think we'll start. Hi, Anthony, you started your talk by saying colonialism is about wealth extraction and then neocolonialism is data extraction. If I just continue that mathematical equation, I can say if data equals wealth, therefore neocolonialism equals colonialism. So do we need to kind of funky up the name of colonialism, or don't you just call it colonialism? And I mean, for me, adding neo makes it sound like neo-jazz and all of that. Another question as well is, what you said about AI being a mirror to society, which for me is kind of the same way as God is a mirror to society. I mean, if you read the Bible and the Quran, the amount of vengeance and hatred and anger that God has towards people, it's the same as human beings have towards people. I'm just sort of wondering whether there have been any studies about the ratio of how AI mistakes compare to human mistakes? Like, we've heard about friendly fire in war. So is there a comparison? Like, when COVID deaths were happening, they only told you how many people died of COVID, they didn't compare it to other deaths that happen. So it sounds big, but if you compare it to other causes of death, crossing the street or cancer or mental health, it's not that big, not as they made it sound in the news. So I'm interested if there are any studies comparing AI errors to human errors. I'm sure there are. And yes, you're absolutely right, because human error, or friendly fire, it's often put down to human error. But again, there is this, as I was arguing, perhaps not totally originally: the brittleness inherent within AI will obviously replicate certain behavior patterns, or indeed errors. But going back to your earlier point, yeah: colonization, wealth extraction, labor extraction; neo-colonization, it's exactly the same thing. You're absolutely right. Data is wealth. Obviously now when we, I remember I used to joke with my students, you know, what's the funding model? What's the business model of Facebook? How does it operate?
And there was that wonderful scene when Zuckerberg was up in front of the U.S. Senate and one senator asked him, so what's your business model? And he said, well, Senator, we run ads, and we make money from that. Now that was in many ways the most disingenuous response he could have made, and everybody was laughing at the senator. It was Orrin Hatch from Utah; I think he was in his 80s, and obviously he was the senator who was apparently out of touch. But Zuckerberg's answer was super disingenuous, because it's nothing to do with money. Absolutely zero to do with money. It's the collation of data. And now, five years after that appearance in front of the Senate, we really see where Facebook is going: the metaverse. What's the metaverse? The metaverse is data, period. Ownership of data, because if you can manipulate and own that data, you can control the future, and that is worth more than money. So there's a very interesting connection between brute wealth based on money and a much more insidious wealth based upon data, and how that's progressing forward. And going back to Maya's point about the algorithmic sublime: what's happening to that data within that system? How is it being replicated? How is it being looped? How is it being coiled? How is it being fed back to us? And how do we willingly engage with that system? In fact, we are that system. We produce this system. When I worked in online advertising, we used to call Facebook a cookie dropper. You never make money from advertising on Facebook, but you drop a cookie on every computer that uses Facebook. Yeah. Thanks very much for the talk, and for the guided meditation, that was very novel. As you made very clear in your presentation, algorithms, which once handled only the linear, deterministic paths of simpler mathematics, are now becoming sophisticated enough to take on the more complex and probabilistic nature of reality. And it's inevitable that governments will use this to create simulations to predict human behavior and to model future societies. Now, in that case, how do we as a civil society preserve our sovereignty? That's a very good question. Yeah, that's a tough one, because I actually don't think we do, and I don't think we can in the ways that we have traditionally tried. I'm not saying those things are not important, but it's a scale thing and it's a temporality thing. To bring up the example of Syrian Archive again, a project that started eight or nine years ago, at least eight years ago: it takes time. The civil war in Syria started in 2011, and look at what is going on there still. These ideas of sovereignty say we have to respond now, but unfortunately it takes a really long time to do that kind of work, and by then, to your point, the technology has moved on. I do think, however, we should be supporting the institutions we've had, of journalism, of the arts. That understanding of sovereignty also has to be culturally and temporally shaped, and it's not something that lives only in the place of demanding things on the street.
So when the A-level algorithm fiasco happened in this country, the fact that you had young people going out on the street saying F the algorithm became the starting point of lots of articles and essays. It's a flashpoint, a great moment to talk about, but the actual work of the sovereignty part, what that actually means, is still ongoing. And so I think we need to give ourselves, our societies, more capacity to think about what that sovereignty means, and it's not disconnected from other things. I don't know if that's the most helpful answer, but I used to do a lot of this activism, and quite honestly, one of the reasons I stepped out of it is that I was feeling very burnt out by some of the strategies, by what you are up against. And I feel, towards the end, and we've talked about this, we've just talked about this: thinking about the affective condition of living with trauma, of living under conditions where you feel there is really nothing I can do, is something to take very seriously. Maybe that's the site of protest, rather than saying, I want the EU AI Act or the GDPR to give me my rights back, though those have a place as well. So I hope that's some kind of answer or response, but I'm being very honest in saying, I don't know, and I think maybe we need to look in some slightly different places. Sorry, just to follow on from that, excellent question. As Maya was talking, I kept thinking, as I always do, of Melville's Bartleby, the Scrivener, when Bartleby says the immortal words: I would prefer not to. I'm sure you're all familiar with Herman Melville's extraordinary short story, where Bartleby literally refuses to be part of the system. Now, I'm sure you also remember the end of it: he's committed to the Tombs, the prison, where he dies. But the moment of disengaging, I would prefer not to, is quite a radical moment in and of itself, because you refuse to generate capital for the system, in those days the legal paperwork of Wall Street. It's an extraordinary moment of repositioning sovereignty, to go back to your question. Because when we think about AI today, about machine learning, and about generative AI particularly, a lot of the argument we're having around ChatGPT is: will it replace human creativity? But I don't think that's the actual question we should be asking. Who gives a damn whether ChatGPT can produce a poem or an essay? Frankly, I don't care. What I do care about is how it's realigning the ontology of what we understand creativity to be, by which I mean how it's realigning us as subjects to think more like machines, which is the classic dilemma that Bartleby endures, because through these systems we are increasingly learning to see like these systems. And that is realigning our subjectivity, which is ultimately realigning how we understand the very notion of sovereignty. That, to me, would be the concern: not whether it can write a poem or an essay, but how it will realign how we understand that moment of creativity itself. And if I can just say something in response to the refusal piece, I think an interesting distinction can be made between modes of resistance and modes of refusal.
And this is something I've written about with a collaborator, Emmanuel Moss. If you think about resistance itself, and go back to this idea of recursion and contingency, resistance gets absorbed back by the systems it was constituted by. So when you have, let's say, big tech saying that it wants to take on board questions of bias or fairness or online harassment, it's keeping the loop intact; it's absorbing your modes of resistance as contingency and then reformulating itself. Refusal is one of those things that, I don't know if it actually breaks the loop, or whether it sets up another loop or another spiral or a coil, but it's quite interesting to think about the difference between those words: what actually constitutes refusal and what constitutes resistance. And it's not so much the act itself as where it's grounded and what it's about: saying there are aspects of life prior to and outside of these systems that should not, or cannot, be subject to them. Even if those acts are really small and don't go up to the International Criminal Court, they are still acts of refusal. And I think that's where that affect comes in, of saying, I also refuse to care about whether it has any impact or not. And that's the hard bit: my refusal will not scale, and I have to be okay with that, in a system which is all about scaling. So, the gentleman over here, and then I'd like to wrap up in a while. Hi, I'd like to make a couple of statements and ask a question. First I'll ask a question of you, sorry, I don't recall your name. You suggested that humans will end up thinking more like the algorithms that we are using. But I don't know what evidence you have for that, since no one really understands where human creativity comes from when it happens. Certainly these large language models and others work in completely different ways from, I think, human and other organic minds. So maybe it's easier, maybe it's more useful, to think about these things as things which are useful. And if they are useful in enhancing creativity, then they will be used. A couple of other points, going back to this use question. If it's useful, it will be used by humans. Humans use tools. This is a tool. The only ones accountable for its use are humans, because these, I think, are completely unconscious machines. There's zero consciousness there. As far as we know. At the moment. This is going into that space. And they're not owned by governments; many of them are owned by corporations, and those corporations are owned by shareholders. Who are these shareholders? Who is accountable? Suppose a Tesla is driving along and its brakes fail, and it sees a 38-year-old hedge fund manager walking across the street, and it sees a primary school teacher, a woman of a similar age. And it just so happens, as you were talking about, that there's no fixed algorithm on the value of human life, and that particular day there's a lot of information circulating about hedge fund managers causing a breakdown in society and teachers being underpaid. Equal probability of killing both of them, right? The car ploughs into the hedge fund manager, and thereafter goes on to kill other hedge fund managers. So first of all, the evidence question: do we really know? I don't know. And secondly, use: if it's useful, what's the problem with it? So the utilitarian argument, to take your latter point, we can come back to.
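To make the modality at stake in this exchange concrete: the questioner's scenario describes a system choosing between actions by probability-weighted value rather than by a fixed rule, which is what Maya later calls choosing "the most likely probable outcome that will generate the greatest good in that moment". A minimal sketch of that kind of expected-utility calculation, with every action, probability, and value invented purely for illustration:

```python
# A toy sketch of decision-making by expected utility: each action has a set
# of possible outcomes, each outcome a probability and a value, and the
# system picks the action whose probability-weighted value is highest.
# All numbers here are invented for illustration, not drawn from any real system.

actions = {
    "brake hard":  [(0.9, -1.0), (0.1, -100.0)],  # (probability, value) pairs
    "swerve left": [(0.5, -5.0), (0.5, -60.0)],
}

def expected_value(outcomes):
    """Probability-weighted sum over an action's possible outcomes."""
    return sum(p * v for p, v in outcomes)

# The "greatest good in that moment" reduces to an argmax over expected values.
best_action = max(actions, key=lambda a: expected_value(actions[a]))
print(best_action)  # -> "brake hard"
```

The point of the sketch is the modality, not the numbers: whatever data happens to be circulating that day changes the probabilities and values, and with them the "correct" action, which is precisely the contingency the questioner is worried about.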
And you're absolutely right. We are technological beings. Since the dawn of man, and I don't want to go back too far, we've always used technology to ascertain and maintain an advantage for the purpose of survival, period. And I think this is the critical thing that we learned from Heidegger: we are inherently and ontologically technological beings. Technology is not an addendum, it is not an addition; it is who we are ontologically. Our being is technical, period. So utilitarianism and all of that, absolutely. But the argument I was making earlier, and perhaps we can come back to the autonomous car, is that if you go back and look at the history of AI, it had one abiding function, and that is predicting human behavior. Go back and look at what I think is the foundational paper, Warren McCulloch and Walter Pitts, 'A Logical Calculus of the Ideas Immanent in Nervous Activity', where they plot, through logic and systematic analysis, what a neuron actually does in the brain, and produce a mathematical calculus of that, in 1943. That's the basis of Frank Rosenblatt's perceptron, and the whole basis of that is in and around mapping mechanically how we think. A function of that goes right back to the old kind of dystopianism: that we, in order to be productive, must think and act like machines. The other night I was watching, as I do every three or four months, Charlie Chaplin in Modern Times, where he's working that conveyor-belt system, gets out of sync, gets sucked into the machine, and the machine spits him out. That great neoliberal model: if we think like machines, cooperate like machines, act like machines, we'll be more productive, but also more predictable. Now my concern, to go back to my point, is that these GPT systems, whether they're large language models, LLMs, or, if you look at Midjourney, Stable Diffusion, DALL-E, they are generating realities that do not exist, and sometimes hallucinating realities. But to what extent is that moment of creativity not about creativity but about realigning how we think about creativity, which goes back to your original point? What is creativity? How do we know what that moment is? How do we map that moment? My fear is that that moment is being mapped. Just as they mapped the movements of workers in factories to make them more efficient, are we living through a time where creativity is being mapped, to make us more efficient in our thinking, certainly, but also more predictive and predictable in our thinking, so we can be managed societally better? Now, you also mentioned governments and private companies. I make no distinction, zero distinction, between Google as a private company and the US military-industrial-entertainment complex. I see them all as one. Maybe that's my mistake; maybe one should be a bit more discerning. But effectively they are all working together, and you only have to look at the game of musical chairs played by the likes of Eric Schmidt, jumping from CEO of Google to suddenly advising the Pentagon, to see precisely how that operates. So that would be where I stand on that. And maybe on autonomous cars, the 38-year-old venture fund, I mean hedge fund manager, sorry.
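For reference, a minimal sketch of the threshold unit McCulloch and Pitts formalized in that 1943 paper: the neuron fires when the weighted sum of its binary inputs reaches a threshold. The weight and threshold values below are illustrative choices, not notation from the paper itself:

```python
# A minimal sketch of the McCulloch-Pitts threshold neuron: binary inputs,
# fixed weights, and an all-or-nothing output once a threshold is reached.
# Weight and threshold values are illustrative, not taken from the 1943 paper.

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Logic rendered as thresholded arithmetic: an AND gate fires only when
# both inputs are on, an OR gate when at least one is.
AND = lambda a, b: mcculloch_pitts_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts_unit([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```

What Rosenblatt's perceptron added was a rule for learning the weights from data rather than fixing them by hand, which is the sense in which this calculus underwrites the "mapping mechanically how we think" described above.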
No, I mean, I think we don't know that that's actually how it's going to be, but what I'm trying to say with my analysis is that the shift that's happening, towards statistical forms of reason and away from philosophical inquiry, gestures first to the diversity and the shifting nature of what human values are in the first place: you're never going to have one set of rules, or even many sets of rules, that will be deterministic. Rather, the rules have to be made up in this very reflexive, recursive way, in this coiling way, and models and modeling are how we know the world, how we know so many things about our world anyway. So it's really about that shift, not about saying who the actual actors within those systems are going to be. I'm trying to gesture to a modality of how these things will operate, and I think we already have systems operating like that, choosing the most likely probable outcome that will generate the greatest good in that moment. Yeah. Just because we're running out of time, we have quite a few questions online as well. Yeah, sorry, there's one that is quite related to this last question, about language and terminology in relation to this kind of unconscious AI. So this is from Bailey Card, who asks: Anthony, you spoke about hallucination and pathology in relation to neural networks, and Maya, you also mentioned them confidently exceeding their data sets through prediction. I'm wondering, as humanities researchers, how do you navigate the language around these phenomena, especially using terms that intimate phenomenology, consciousness, et cetera? What's useful about doing so? Is it also possible that any ground is lost politically by doing so? Very nice question, thank you to whoever asked it. I think you have to be reflexive. You have to keep knowing that you're using this language, and the limits of your language. For me, there was a little project I did on the metaphors of AI, and many people are doing this now, showing that these metaphors work as self-fulfilling prophecies: they create the realities that we will inhabit. Many will say it's just words, it doesn't really matter, but no, those words are really serious, because they do create those realities. So I think all we can do is find ways to name them, to reflect on them, and to be conscious of the ways in which we're doing it. I know from conversations with computer scientist friends especially that there's a split among them, and some of the more critical ones don't like any of the anthropomorphic words, because they feel it takes them down a path where they have to keep insisting: this is not human; these are inhuman, non-human ways of being and thinking. So when humanities people, when we use some of this language, we're trying to evoke not just the structures of feeling; we're trying, in an affective way, in a performative way, in a rhetorical way, to situate what these things are doing.
So it's a dance sometimes, and I know this from friends. For me that reflexivity also means saying to computer scientist friends: read my text and give me comments on it, and finding what resonates differently with different people, and saying, yes, I realize that, but I have a different role and a different audience and community. Being general and being legible to everybody is exactly the hubris of AI. So sometimes you have to say: no, I am going to be particular and specific, and you have to come to where I am and understand why I'm using language, or methodologies for working with language, in a certain way. That's been very interesting. I would like to ask one question very quickly and make one observation, as we are running out of time. Firstly, the question, on ethics: is being killed by a machine's decision somehow worse than being killed as a result of a human decision? And is it just a matter of numbers? If it's not just a matter of numbers, why? Because these things get called black boxes, and I would say, as someone who has taken apart and occasionally constructed the so-called black boxes, they are comprehensible. It's hard, in the same way that if you handed me a molecule I'd spend weeks trying to understand what the damn thing did, but they are comprehensible. And you mentioned the idea that we sleepwalked somehow, and I think that raises the question of who this 'we' is. I was talking about this stuff in the 1970s; I'm really old. I've been involved in doing AI and large data systems for the government, I've done stuff for the military, all sorts of people. We knew all about this years ago, and, trying to find a more generic way of saying this, people from the arts side didn't want to talk to us, and actually made personally offensive comments when the subjects were raised. And I would ask you, with all due respect, to listen to what you've said this evening and the value judgments you are making, because one of the questions you asked was: how should we start researching and analyzing this? I would ask you to look at the value judgments you've made about the people in my culture, and ask yourself how happy you would be to have said those things about an ethnic group, a racial group, a gender, or anything like that, because it does not sound like anything resembling objectivity. If you characterize people as the bad guys, then all your analysis amounts to: it's bad because they did it. Look at the material characterizing cyber warfare and AI-based weapons systems and compare it to someone discussing Caesar's campaigns, where people are neutral, even though Caesar was a deeply bad person by our value system: he owned slaves, he enslaved people on a massive scale, and killed a lot of people as well. So I would ask you: can you find within your group people who will not seem, to people from my culture, like the enemy? Because if you think of people as the enemy, you're going to find ways to bring about their downfall rather than actually understand them. I mean, can you explain more about who you are and what you do? Because obviously, I mean... I've done a variety of systems for the government and various large corporations, I've worked in AI for BT, I've done data systems for the government that do various things in various ways, and for the banks. I've worked in this sort of stuff for many years. I've ended up being the vice chair of the Conservative Party's science and technology think tank. Sorry, I apologize.
Most of it was in no way my fault, honest. But I've worked in defence, and I do find, especially with what's going on in Ukraine, the characterization of anyone who works in defence as someone who wants to napalm babies, which is not what you've said, but which is within the sort of culture that you represent. I don't even find it offensive. I just find it, I'm trying to think of a really non-offensive word, I just find it really quite naive to characterise people as if we just want to kill people. I don't particularly want to kill people. So let me be clear. But a terrorist tried to kill me when I was six. I was six, not the terrorist. They tried to burn me alive. I'd have happily shot him in the face. I'd have shot anyone who happened to be a member of that organisation. And you would identify that terrorist how? They were on the television. They were on the television. How would you identify that terrorist? Oh, at six I couldn't have identified anyone. I knew the group that tried to kill me because they went on the news explaining how cool they were. So, I'm sorry, you've opened up a whole area which I... Okay. I'm sorry they tried to kill you. Yeah, maybe just a very quick response, because we're going to run out of time. My concern is the way in which these decision-making processes are being devolved, not to humans in the loop. Hence me asking you: how would you identify a terrorist? Because increasingly that identificatory process is not being given to people; it's being given to machines. Now, very interestingly for me personally, most of the arguments against the use of lethal autonomous weapons are coming out of the defence forces, because they have realised, for at least 50, 60 years, the potential of this technology to wreak utter destruction. And you're absolutely right, those discussions were being had 50, 60 years ago, in this country and in the United States. That has not stopped, however, a positionality whereby there is, and continues to be, to my mind, a move towards autonomous targeting and autonomous destruction based upon the rationalisations of AI. So what is the problem with that? Because I can audit a computer. If a squaddie with a gun loses it and starts shooting people, he will make up all sorts of things; he may even believe them. People under combat stress do things that simply aren't comprehensible even to them. But with machines, we can audit and we can debug. These black boxes are not immutable. So you're comfortable with the mechanisation of death? Yes, if it comes down to the Boolean logic, I would say. Well, that's your position. That is most certainly not my position. And it could be wrong, but I accept that other people reach different decisions. I don't characterise people on the other side. I don't know where you're coming from with this 'other side' thing. You use the word 'so-called' really quite a lot; that's not an objective term to use. OK, I don't think I said it eight times, I guess. So you're comfortable with the mechanisation of death? We've had booby traps since the Romans; that's the mechanisation of death. So, given that we're out of time, I think we will call it a day, but there is a reception and there's time to continue conversations. And I'd really like to say a very, very big thank you to Anthony and Maya. Thank you all.