Okay, I want to take a minute again to thank Dare for the incredible coordination of the day, and to thank all the panelists and all our moderators, many of whom are still here and have been really paying attention the whole day. So thank you, everybody. And I'm going to take a minute. We have Kate Crawford over here. She is now at NYU, running a center called AI Now, Artificial Intelligence Now. And this critique of mapping and data and computation has, nowadays, moved very quickly into machine learning. So she's taking on the critical side of AI at NYU. She also happens to be a very good friend of Trevor's and is going to give a fantastic introduction.

Can I just suggest, since today has been so extraordinary, that we have a little round of applause for Laura and everyone at the center who made this happen? It's been amazing. It's been incredible. Thank you to all of the speakers and everybody who participated. It's been really wonderful. So, my name is Kate Crawford. I'm based down at NYU. When Laura asked me to introduce Trevor, I thought, oh no, this is getting increasingly daunting, because Trevor has by now worked across so many different forms. There's photography. There's sculpture. He's traced submarine cables around the earth, and he's now building a satellite. So where do we begin? I decided not to read out Trevor's bio; if you want to look at it, it's there in the program. Little tidbits include, in addition to sharing an Oscar with Laura Poitras, that he was recently named a MacArthur fellow. So congratulations on that, Trevor. Very awesome. On a far more humble note, though we're quietly excited about it, Trevor will be the inaugural artist fellow at the AI Now Institute. And this is really in recognition of the fact that Trevor has been doing a lot of work in machine learning and artificial intelligence over several years.
To tell you a story: a few years ago I was giving a talk in Berlin, and Trevor said, come to the studio and I'll show you the system I'm building. He was building a bespoke machine vision system that would detect, if you stood in front of a camera, how you were feeling and your gender. And I stand in front of this thing and it says, okay, Kate, you are 48% happy and 32% angry and 9% male. I'm like, ah, all right, but if you stand this way, you're 11% male. I thought, this is going to be a really interesting system that will make a lot of people self-conscious when they go to Trevor's studio. But I think the consistent thread throughout his work has been trying to find ways to make public how forms of power are weaving through our lives. Often that's infrastructural, sometimes it's discursive, and sometimes it's built into systems that you can really only find when you get into the materiality of making them. In the case of Trevor's emotion and gender detector, this went into a performance piece he created with the Kronos Quartet in San Francisco, where you would see the quartet alongside these extraordinary projections showing what they were feeling: what percentage of each emotion they were expressing while they played, what percentages of gender were read off their faces. And it was interesting because it showed the absolute failure of systems like this to really grasp what is happening when a quartet plays; the emotion, the facets of expression, completely fall by the wayside with a system like this. So obviously Trevor is now training systems to see in particular ways. And he's been doing that by creating very unusual training sets of everything from Freud's interpretation of dreams to the monsters of capitalism.
And it's this connection with the uncanny, with the ghostly, and with the monstrous that we will be hearing about today. So without further ado, I would ask you to welcome Trevor Paglen.

Thank you, guys. Thanks for sticking it out. It's been a great day, but it's been a long day and we're at the end of it, so I will cut a bunch of the fat out of this talk and we'll get through it; we won't get into the weeds on some of the stuff I was going to dig deeper into. A couple of things I have to say first. First of all, thank you so much for inviting me, Laura. We've been friends for a long time, and I'm really, really happy to be here. I also have to point out Kate Crawford. Thank you so much. We are old friends, and a lot of the intellectual work in this talk and in this body of work has been done basically in collaboration with Kate. So one should assume in this talk that if you hear something smart, it's her idea, and if something feels sloppy and dumb, that's my idea. That's the division of labor going on here. And for those of you who are extra tired, I'm basically going to give a talk that is broadly similar to a lot of what Wendy Chun was getting at earlier today, but she is so much more elegant, so you can just leave now and watch that talk if you want to call it a night. Having said all of that, this is my introduction. This is the title of the talk: Monsters in the Smart City, or How Decades of Cat Torture Created a Class of Vampires Who Weaponized Stalinist Aesthetics and Now Threaten to Turn Cities into Frozen Wastelands. And, oh yeah, what to do about it. Spoiler alert: it's kind of hard. In five sections. Section one: the city of invisible images.
What we're seeing right here is a series of images from a fairly standard training set used in machine learning and artificial intelligence applications. Training sets like these are collections of thousands, or more typically now millions and millions, of images, and they're used to train machine learning and artificial intelligence systems. They have really become part of our urban infrastructures, our global infrastructures: smart cities, our personal devices, our homes. These are the images that underlie how so much of the city works. And they're part of what I think is an extraordinary revolution happening in visuality right now, which is the perspective I come at this from. There's an explosion of visuality that is just as important, and probably even more important, than the rise of mass media in the 20th century, or the rise of the culture industries and the spectacle, those classic characterizations of 20th-century visuality. But this moment is characterized by something different. It's characterized by computer vision, by ubiquitous sensing, by artificial intelligence, and by infrastructures that increasingly behave in seemingly autonomous ways. So what's going on is that a kind of visuality is emerging that's very different from the visualities of the past, first and foremost because it is largely invisible. And there are examples of this everywhere: guided missiles and drones engaging in autonomous kinds of warfare; smart cities using things like license plate reading to track the movement of automobiles through cities and highways, airports and smart borders; smartphones using facial recognition systems to verify people's identities; self-driving cars, of course, using computer vision systems to navigate autonomously.
Autonomous vision is built into manufacturing, quality control, and logistics systems. It's part of the most intimate parts of our lives. Our social media obviously recognizes our faces and our friends, and when you upload pictures to the cloud, it's not just faces being recognized; it's objects, gestures, places, products. Images are starting to make autonomous diagnoses of our everyday lives, beginning to analyze how many calories you consume and how much you exercise, and this kind of data is used to make very intimate portraits of you. So we have arrived at a very bizarre moment in the history of images, one where the traditional relationship between humans and images has become inverted. It used to be the case that humans would look at images and images wouldn't look back, but now the images look at the humans, and more often than not, the image itself is actually invisible to human eyes. What this means, theoretically, is that the history of what we've considered visual culture is very quickly becoming a special case within a much larger visual culture, one that has to do with autonomous sensing. And this visual culture, developed by engineers and machine learning systems, is made out of images that are never seen and in many cases cannot be seen by humans. It's a very strange situation. So this leads us to section two: a city built on the broken bodies of tortured kittens. Now, where does this autonomous, largely invisible visual culture come from? How did it become the one we find ourselves so enmeshed within? The story I want to tell begins in 1959, when two early neuroscientists wanted to figure something out. They wanted to understand how vision works. And so David Hubel and Torsten Wiesel started doing things to cats. What they did was crack open a cat's skull and attach electrodes to the interior of its brain.
And they started showing lots and lots of slides to the cats, because they wanted to see what kinds of neurons would fire in the cats' brains when they showed them different kinds of slides. They wanted to know how cats see. The idea was that if you could understand how cats see, then maybe you could learn something about how humans see. Nobody really knew what the relationship was between photons entering your eyes and the meanings we ascribe to those photons. How does that happen? Do we perceive images as fully formed wholes? Does the brain create 3D models of an object and try to fit those models onto subsequent images it encounters? Or does vision do something entirely different? What they expected to find was that if they put a slide of a fish in front of a cat, the cat's brain would light up like crazy, get really excited, thinking maybe it was going to get a fish to eat or something like that. And they ran into a problem, which is that nothing happened at all. They put a picture of a fish in front of a cat, and no neurons would fire. They were very confused. But what they started realizing was that every time they changed the slide, the cat's neurons would fire. And the conclusion they came to, or the hypothesis they developed out of this, was that the neurons in the cats' brains were firing in response to strong edges and motion, and that this was some kind of primitive form of neurological activity happening at the very fundamental level of vision. This might seem like a trivial thing now, but it was a very important insight at the time, with huge implications for computer vision research. Now, this research was developed over several decades and expanded in the 1970s by Colin Blakemore.
"We regulate the visual environment by keeping kittens in complete darkness from birth, bringing them out for periods of controlled visual experience inside special cylindrical chambers. Here, a kitten is being exposed to an environment consisting entirely of horizontal stripes. And it wears a ruff to prevent it seeing its own body. This particular kitten will be exposed to these stripes each day from the age of three weeks until it is three months old. All the rest of the time, it remains in complete darkness. What are the consequences of this restricted visual experience? This is the same kitten at three months of age, seeing a normal environment for the first time. Watch the way that it responds to a shaking horizontal stick. The kitten can see it. But if the stick is made vertical, the kitten no longer reacts to it. Remember that this kitten saw only horizontal stripes when it was young. If we record from the cortex in a cat like this, we find that there are no neurons that prefer vertical edges. The cells will only respond to those orientations which the kitten saw when it was very young. Presumably, this explains the animal's unresponsiveness to vertical contours." So what this research in the 1970s did was develop Hubel and Wiesel's idea that there is a kind of theory of vision inherent here: that representations are built up from primitive shapes, and those primitive shapes correspond to neurons in your brain. The idea is that when we see a fish, we don't actually just see a fish; we see a series of edges and small primitive shapes that are put together in such a way that allows us to recognize it as a fish. And obviously that can be a frangible process. Those neurons are somehow formed through our experience of the environment when we are young, as evidenced by the kitten who never saw a vertical line as a kitten and therefore literally does not develop the neurons to be able to see a vertical line at all.
So I was going to do a little history of computer vision, but we're going to skip through that in the name of time and fast-forward to the present: the development of computer vision, optical character recognition, emotion and age recognition. Beginning in about 2012, we see the rise of computer vision working in tandem with neural networks and the advent of deep learning, or artificial intelligence, or what have you. And this comes out of two things that happen historically. One, processors get very fast. Two, you get access to massive training sets; beginning in about 2012 there are collections of millions of images that you can use to train neural networks. And that's the moment when this stuff starts working. Neural networks that are designed to do vision work in ways that are broadly analogous to the theory of vision developed in those cat experiments. The idea is that you build a network that has all these neurons, you train it on a bunch of images, and it forms neurons that correspond to basic shapes: vertical lines, horizontal lines, other things like that. And that's how it recognizes objects, by putting various primitive shapes together. In my studio we've been developing tools that allow us to make synthetic images of what different neurons respond to. These are some of them: super weird shapes that the AI just invented for itself to try to make sense of other objects. These are what are called the deep layers in the network, or neurons within those deep layers. At the output, the fully connected layers, as they call them, are the layers that correspond to all of the objects a particular AI has learned, or has been taught, how to see. Once you build a neural network like that, you can induce it to make images of what it has learned.
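The orientation-selective "neurons" described here can be sketched in a few lines of code. This is a purely illustrative toy, not any real framework or the studio's actual tools: a hand-made filter tuned to horizontal edges fires strongly on an image of horizontal stripes, like the ones Blakemore's kitten saw, while a vertical-edge filter stays silent, just as the kitten's cortex did. All names and values here are invented for illustration.

```python
# Toy sketch of Hubel-and-Wiesel-style orientation-selective "neurons",
# as they appear in the first layer of a convolutional network.
# Everything here is illustrative; real networks learn these filters.

horizontal_edge = [[-1, -1, -1],
                   [ 0,  0,  0],
                   [ 1,  1,  1]]
# The vertical-edge filter is just the transpose.
vertical_edge = [list(row) for row in zip(*horizontal_edge)]

def neuron_response(image, kernel):
    """Slide the 3x3 kernel over the image; return the strongest activation."""
    best = 0.0
    for i in range(len(image) - 2):
        for j in range(len(image[0]) - 2):
            acc = sum(kernel[a][b] * image[i + a][j + b]
                      for a in range(3) for b in range(3))
            best = max(best, abs(acc))
    return best

# Horizontal stripes, like the environment of Blakemore's kitten:
# bands of 1s and 0s, two pixels thick.
stripes = [[1.0] * 8 if (r // 2) % 2 == 0 else [0.0] * 8 for r in range(8)]

print(neuron_response(stripes, horizontal_edge))  # fires strongly: 3.0
print(neuron_response(stripes, vertical_edge))    # no response: 0.0
```

The point of the toy is the same as the point of the experiment: a "neuron" here is nothing but a primitive shape detector, and recognition is built up by stacking many of them.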
So this is an image of an apple that was made by a neural network trained to see apples and other things like that. Once you've trained the neural network how to see objects, you can give it an image it's never seen before, and it will say, this is an apple, right? It's learned how to see it. It's never seen this image before, but it can recognize that shape as an apple. So this research done by Blakemore, Hubel, and Wiesel seems to have demystified perception in a certain way. It showed how vision could be reduced to a series of rather simple technical operations, and how more complex forms of sensing were reducible to these basic operations. But it seems to me that this work may have done the exact opposite: instead of demystifying perception, perhaps it went a long way toward training generations of computer scientists to think about perception in far too simplistic ways, and training the rest of us not to worry about the weird stuff they're doing to cats in the labs over there. And I think there is a kind of original sin in this research. Hubel and Wiesel and Blakemore undoubtedly believed that by setting up these completely controlled environments in their laboratories, they were bracketing out a bunch of noise and complicated contexts, allowing them to focus on the fundamentals of perception. But I think you could argue that by creating that work under such artificial conditions, conditions that simply do not exist outside the laboratory, they created an environment that wildly distorted their object of study beyond any reasonable recognition, because sensing systems, like people, never see in isolation from the social, political, and cultural conditions that they are produced through and productive of. And unfortunately, seeing systems are usually deployed in the service of the vampires.
Which brings us to section three: the vampiric city, or how machines actually see. Now, humans do not see in isolation. We have a vast background of cultural associations, rich histories of symbols, personal experiences; we have complicated subject positions, and all of those are present when we look out at the world. We see differently based on who we are, where we come from, and where we find ourselves. In an image like this, you might see one thing and I might see something entirely different, and maybe I'm even seeing something that nobody has ever seen before. And the same is true of machines. They do not see independently of the kinds of functions they've been tasked with performing. Let's take something like optical character recognition, a technique we encountered before. You can do this in a laboratory, and you can combine it with other kinds of computer vision systems to do things. In the early aughts, you started to be able to do this pretty reliably in the wild; in other words, you could read texts autonomously out in the world, rather than in laboratory or controlled conditions. Now, how did that play out? In 2005, a company called Vigilant Solutions was formed to put all this together and deploy it out in the world. Their business model started out very simply. They would set up cameras on private cars, on police cars, on telephone poles, on light poles, and in the parking lots of supermarkets, apartment complexes, and big businesses. And the cameras do a very simple thing: they take a picture of every single license plate of every car and put that information into a database. What they do then is sell access to that database to police, private investigators, insurance companies, whoever else wants access to it. And if you're the police and you want a record of all the cars that a Vigilant system has seen, they will sell it to you.
And if you're a bank and you want to repossess a car, Vigilant probably knows where it is and will sell that information to you. So in January 2016, Vigilant Solutions signed a bunch of contracts with a handful of local Texas governments. The deal goes like this: Vigilant Solutions provides police with a suite of these license-plate-reading cameras for their police cars, plus access to the larger database. In return, the local government provides Vigilant with records of outstanding arrest warrants and overdue court fees, and this list of flagged license plates associated with outstanding fines is fed into the mobile system. So when a police car using one of these systems sees a flagged car, the cop pulls the car over and says, you have an outstanding arrest warrant or court fine or something like that. The system has a credit card reader, and the offer is: okay, you can pay the fine now, plus a $25 service fee to Vigilant Solutions, or you can go to jail. And in addition to that, Vigilant Solutions of course gets a record of all the license plates recorded by the overall system. Incidentally, Vigilant Solutions signed a contract with ICE, I think last week, to do this at a national scale. Now, they've expanded their business in other ways. The next thing they want to do with these systems is incorporate facial recognition. So not only do the cop cars drive around recording all the license plates, they want to take pictures of all the faces and enroll them in similar databases. Now, Vigilant Solutions has some competition here. There's a company called Axon, otherwise known as Taser. And I want to give a little shout-out to Ava Kofman at The Intercept here, because she's been doing really fantastic work on this. But yes, Taser, now called Axon, has gotten into the body cam business. There's a huge desire from the public to have police use body cameras, for accountability.
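As a data flow, the Texas arrangement described here can be sketched roughly as follows. This is a hypothetical reconstruction of the logic as recounted in the talk, not Vigilant's actual software; every identifier, plate number, and amount below is invented except the $25 service fee.

```python
# Hypothetical sketch of the ALPR flag-and-collect flow described in
# the talk: every plate read is retained and resold, and plates on a
# municipality-supplied list trigger a roadside collection with a
# service fee added. All names/values invented except the $25 fee.

from datetime import datetime, timezone

SERVICE_FEE = 25.00  # the vendor's cut per roadside collection

flagged_plates = {           # supplied by the local government
    "ABC1234": 310.00,       # hypothetical outstanding court fine
}

sightings_db = []            # every read is kept, flagged or not

def process_plate_read(plate, location):
    """Log the sighting, then check it against the flagged list."""
    sightings_db.append({
        "plate": plate,
        "location": location,
        "seen_at": datetime.now(timezone.utc).isoformat(),
    })
    if plate in flagged_plates:
        return {"flagged": True,
                "amount_due": flagged_plates[plate] + SERVICE_FEE}
    return {"flagged": False}

result = process_plate_read("ABC1234", "I-35, hypothetical TX town")
print(result)             # {'flagged': True, 'amount_due': 335.0}
print(len(sightings_db))  # the sighting is retained either way: 1
```

Note that the structural point is in the last line: whether or not you owe anything, the record of where your car was goes into the database that gets sold.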
Of course, it seemed like a good idea. And Taser decided to make lemonade out of lemons with it: fine, we'll make body cameras, and the body cameras can do facial recognition. They can enroll every face that every cop sees, build them into giant databases, et cetera. Now, I think a lot of people here have probably heard about the Chinese social credit score system. It's a kind of meta credit score: you have a score that indicates how good a citizen you are. You pay your bills on time, your score goes up. You post nice things about the government online, your score goes up. You show up to work on time, your score goes up. You say bad things about the government, your score goes down. You don't pay your debts, your score goes down, et cetera. If you have a high score, you get, literally, discounted movie tickets. You get access to better visas; it makes it easier to travel. You get access to better schools. You get better access to municipal services. And the converse is also true: if your score goes down, your life gets a lot harder. And people say, oh my God, it's China, it's so dystopian; those kinds of things can only happen in a society with the kind of centralized political system China has. But that is absolutely not true. The Chinese system has a slightly different flavor than what would happen in a more wildly neoliberal system like our own, but the basics are the same. For example, let's look at what goes on in the US. Facebook has patents to modulate access to credit based on your social media activities and the credit scores of your friends. And it came out, I guess last week, that Facebook has patented algorithms to assign you a class position, to try to figure out what social class you belong to.
Insurance companies are scrolling through social media to look for evidence that somebody is a smoker, or takes selfies while driving, or engages in dangerous sports, or has unusual pets. Health insurance companies are modulating premiums based on Fitbit data. And all of this is only the beginning. So here's what it adds up to: smart cities composed of public-private infrastructure partnerships, whether that's Vigilant or Google or Facebook or Ford Motors, and these have predictable tendencies. One tendency, of course, is to create pay-to-play infrastructures for things that would previously have been considered public goods. A second tendency is that cash-strapped municipalities will inevitably rely on these kinds of systems to raise revenue, as we saw in those Texas municipalities, by charging for access to municipal infrastructures while the corporations providing capital for those infrastructures take a nice cut: pulling people over to extract the court fees, with Vigilant Solutions getting the $25 bonus. And this will play out in predictable ways. It will play out by preying upon the most vulnerable populations through institutions like policing and ticketing. Think about the city as a kind of massive, autonomously running payday lending machine, or think of an automated version of Ferguson, a city that had issued something like 35,000 arrest warrants for a population of 22,000 people as a way to keep the entire population in a state of debt peonage, basically. So this is the vampire city. The vampire city becomes realized when ubiquitous sensing technologies are coupled with the neoliberal city, a city where previously public services have been privatized or turned into revenue-generating machines in lieu of more traditional forms of taxation.
And sensing technologies allow for this; they allow for the colonization of moments of everyday life that were previously inaccessible to police and to capital. You can think of it as analogous to an enclosure of the commons, but where the commons are the most intimate moments of our everyday lives. And this creates a situation where the city itself lives by preying upon its inhabitants. So Hubel and Wiesel developed their theories of vision in a lab. What they were trying to do was develop a theory of vision that worked regardless of context, a theory more fundamental than one that might be developed amid the messiness of the world. But the ironic thing is that that lab, as I said before, was a highly specific context, one that bears almost no relationship to everyday life. And perhaps that laboratory context contributed to an implicit epistemological and aesthetic theory that has underpinned computer vision, sensing, and artificial intelligence ever since. Even more ironic, perhaps, is that this pair of scientists was working at the height of the Cold War, and their research would be deployed in the great arms races of the Cold War, in the great American machines of capital and the military, while the underlying aesthetic theory they developed had a weirdly, deeply Soviet nature to it. Which brings us to section four: the socialist realist city, or the Stalinist aesthetics of artificial intelligence. If we squint really hard and ignore the ultraviolence and mass murder that characterized Stalinism, we can maybe almost sympathize with the philosophy of Soviet socialist realism: the insistence on depictions of everyday life, the elevation of peasants and workers, the uncomplicated realism of its representational strategies, the idea that art should be easy to understand, available to mass audiences, and should show us the path toward a more equitable society.
Socialist realism, in my understanding of it, basically has two precepts. First, there's an assumption of a more or less transparent relationship between images or representations and the things they represent: an assumption that representations are natural, that their meanings derive from an objectively construed correspondence to the world. We can call this a precept of naturalism. The second assumption is that this aesthetic should show us underlying social forces and work in their service. There's a normative claim that aesthetics should advance a political agenda, and should do it in a way that's not obviously propagandistic. So there's a kind of idealist precept here as well. And when we look at the implications of some of the assumptions of this aesthetic theory, we find some strange characters. If we take the assumption of naturalism, that truth can be discerned or ascertained through representations or through images, then this is very much the aesthetic paradigm of the tortured kitten, of artificial intelligence, of the invisible image, and of the vampiric city. But its most obvious expression, the obvious conclusion one might draw from this aesthetic theory, would be physiognomy, right? And many forms of physiognomy have risen in parallel with the development of AI. In fact, I think physiognomy is a paradigm of AI, in the sense that it's a paradigm of taking relational and political concepts and treating them as if they were natural ones. Criminality, for instance, can be thought of as a social construct, but it gets treated as a noun, and the idea is that a machine can see that noun written on your face, because there's some kind of natural correspondence between your true nature and what you appear like. And we've seen research like this being done.
This is a paper published in 2016 by Wu and Zhang, called Automated Inference on Criminality Using Face Images. They claimed the ability to predict whether you were a criminal or not based on nothing but your face. This research has been recapitulated. Someone was talking earlier about Kosinski at Stanford, of Cambridge Analytica fame, who wrote a paper in 2017 that I'm sure a lot of you saw, in which he said he was able to figure out whether you were gay or not by looking at your face with artificial intelligence. An Israeli company has picked up on this and is hawking a product that claims to be able to tell if you are a white-collar offender, a terrorist, or a pedophile based on what you look like, and other companies are gentler but are doing something similar. Affectiva is a company that tries to figure out your emotional state based on your face. We built one of these in the studio. This one says that Hito is 57% female, 42% male, 73% sad, and 17% disgusted. But what we must obviously ask when we look at a picture like this is: somebody has decided what 100% female is, right? Who has made that decision, and on what basis? Which person is 100% female? Is Barbie 100% female? Is Grace Jones 100% female? Is RuPaul 100% female? There's a political judgment being made here that's hard-coded into the infrastructure. So this aesthetic of autonomous sensing and interpretation in many ways echoes that of socialist realism, though of course it has its own flavor. Now, I used to call this aesthetic of machine learning and artificial intelligence "autonomous hyper-neoliberal mega-metarealism," but I was informed that nobody would use that as a term. And I said, I know, I don't want them to; I'm making fun of academics who think that neologisms are like cryptocurrencies. And then Kate said, well, then you should just not have a word for it.
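Mechanically, numbers like "57% female, 73% sad" typically come from a softmax: the model emits a raw score per category, and the softmax normalizes those scores into percentages that sum to 100. The political choice, what the categories are and what counts as 100% of each, happens upstream, in the label set and the labeled training data. Here is a minimal sketch; the scores and labels are made up for illustration and are not from any real system mentioned in the talk.

```python
import math

# Sketch of where classifier percentages come from. The arithmetic
# is trivial; the contentious part is the hard-coded label set and
# whoever labeled the training data that defines each category.

def softmax(scores):
    """Normalize raw scores (logits) into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

gender_labels = ["female", "male"]   # a hard-coded binary, by design
gender_logits = [1.5, 1.2]           # hypothetical raw model outputs

for label, p in zip(gender_labels, softmax(gender_logits)):
    print(f"{p:.0%} {label}")        # prints: 57% female / 43% male
```

The takeaway is that the confident-looking percentages are just a rescaling of arbitrary scores over categories someone chose in advance; nothing in the math can tell you whether "female" should be a category at all, or what its 100% reference point is.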
And I said, but I think we should have a word for the aesthetic paradigm here. So she said, well, you should call it machine realism. So we'll just call it machine realism. Capitalist machine realism, neoliberal machine realism, whatever. Machine realism. We'll just go with it, fine. It's an aesthetic model defined by the autonomous attribution of meanings to images by AI, in the service of capital or in the service of police. It's a kind of hyperrealism designed to see, and only see, aspects of images that can be transformed into capital. It can see that you're eating a Twinkie, correlate eating Twinkies with bad health, and turn that image into capital in the sense that it can monetize that information by selling it, for example, to a health insurance company. A picture of a kid drinking beer can be sold to a car insurance company, or to local police, or what have you. So this machine realism has a lot in common with socialist realism, but it's far more insidious, for a couple of reasons. First of all, it produces meanings that are non-contextual. They're fixed. They're universal. A horse is a horse. What's more, it operationalizes those meanings in ways the Nazis or the Stalinists could never even have dreamed of. It centralizes power among those who control the images and those who control the meanings of those images. So it's a visual system that lives alongside us, making persistent, molecular invasions into our lives while remaining almost entirely invisible to us. Now, this sucks. Just like socialist realism kind of sucks, this machine realism kind of sucks too. And this brings us, obviously, to René Magritte. This is not an apple. It's a pipe. And of course this has everything to do with the right to the digital city. Section five: let's bring this home.
Incidentally, I know that the version of socialist realism I presented in the last section was a bit of a caricature, so don't worry about it, and you don't need to explain that to me after the talk. Section five: the right to the digital city. So the creation of these socialist-realist vampire cities and their rapid expansion brings a new urgency to an insistence on rights, both traditional and new. One: the right to due process in an increasingly autonomous and machinic environment, something Kate has done a lot of great work on. Whether it is police putting us in jail or a social media company selling information to insurance companies that might affect our ability to get driver's licenses or healthcare, I want a city where those who have the power to modulate our liberties are required to have auditable, explainable, transparent, and accountable systems. Two: a right to inefficiency. I think we should have the right to decide that there are sections of our everyday lives, or of our public spheres, that we do not want to optimize. You could build a city tomorrow that would automatically enforce anti-jaywalking laws with nearly perfect efficiency. And that would be horrible, right? I want to think about a city where one can, paradoxically, have the right to break the law, not only because I like breaking the law and I like jaywalking, but because the history of social justice is a history of people breaking unjust laws in the service of a greater good. And part of that idea is a right to inefficiency in capital, a right to inefficiency in law enforcement. Correlated to that, I want a city where anonymity is a public good, where we can think about anonymity itself as being a public good. But above all, I want to live in a city whose aesthetics are more in line with those of Magritte than those of Stalin. And the reason I'm coming back to this image is because of what is, for me, part of the point of Magritte.
I never thought about it that much until I started working on artificial intelligence. But part of the point of this, for me, is that Magritte is pointing out a tension between an unexamined common sense that says this is an apple, and the fact that the meanings of images are ultimately social and political constructs. And as such, they're subject to change. So the right to say "this is not an apple" is not just a semiotic stunt. The right to say "this is not an apple" is the right to say "I am a man," or "I am a woman," or "I am neither a man nor a woman, but I am a person." It's the freedom to say "I am a Palestinian." It's the freedom to say "I am beautiful." At its most basic, it's the freedom of self-representation. And that freedom of self-representation goes hand in hand with the right to self-determination. Because meanings can change, society can evolve and, hopefully, get better. Every single social struggle has been a struggle over meaning just as much as it's been a struggle over rights, because the two are inseparable. Every social struggle has, in effect, been a struggle in part to make images mean different things. Social struggles for self-representation are exactly that: struggles to define the meaning of your own image. And this is one of the many reasons why this image scares the shit out of me. It's an image of meaning being fixed, a kind of freezing of space and time. It's an image of the biases and inequalities of a bad past being hard-coded into everyday infrastructures, and it perpetuates that bad past into a bad future. This is not an apple, and this is not an image of an object recognition algorithm labeling an apple. This is an image of a city that has become frozen, and whose inhabitants are struggling to stay alive. Thank you so much for having me. Questions? Yeah, by the way, Trevor, I've been doing some neuroscience stuff as well and have been talking to some neuroscientists here who make similar points.
They actually say it themselves. There's this one person working on spatial memory, and a lot of neuroscientists study spatial memory, and he said: how can anybody understand spatial memory if all we've been doing is having rats run around in mazes for the last 60 years? It's the same thing. So his solution is that he's now studying chickadees, because of how they cache food: they're awake for only about three hours of the day, and in that time they have to hide their food in lots and lots of different places. Then they go back to sleep, and when they wake up again, they remember exactly where they've hidden it. So he's doing these experiments now and has built these amazing cages to observe that. So again, he had this incredible instinct, but then, based on that instinct, he's designing an experiment that's just going to back it up, you know. Anyway, that's the forefront of science for you. It sounds like the smart city, yeah, exactly. Okay, questions. Yeah. Hi, I have a question. I want a drink. I know you want a drink, and we're over time, so let's do this question and then go have a drink. Okay. This was a great talk, but I'm really troubled by where you ended it. There's a kind of visual history, a rethinking of visual history, that you provide, and it seems to me that it doesn't sit with the political history you end with, because you end with a profoundly liberal notion of rights to the city, and indeed a liberal notion of rights in general. Forget about Stalinist aesthetics, and the fact that the political history of Stalinism can't be reduced to its socialist realist aesthetics one way or the other, right? But there's also a real...
I'm very troubled by the liberal vision of rights to the city that you end with, this notion of the right to self-representation, when the entire history of what you're talking about shows that that liberal model of rights doesn't work, given the very complex visual history you're telling us. So I'm wondering what space you have for rethinking the political vision, one that doesn't land us back in what I want to call a kind of secular, white, liberal notion of politics. Right. Actually, I am not presenting a narrative of how global rights work here. I am an artist, and I'm talking about how we work with images, so my analysis is limited to that. I totally agree with you that self-representation and redefining meanings is not ultimately a political project in and of itself. I'm just saying that it's a part of most political projects, most social justice projects. So, again, I'm not creating an overall theory of how one can claim rights. I'm picking up one tiny thread of something that I see going on in the larger landscape and pointing out why I think this one little thing is a problem. Thank you, guys.