Hello, everybody, and welcome. Thank you so much for joining us today. My name is Ed Finn. I am a co-director of Future Tense. Future Tense is a project of Arizona State University, Slate magazine, and New America that examines the impact of technology on society. You can follow us on Twitter at @FutureTenseNow or on Slate. I also run the Center for Science and the Imagination at Arizona State University, and that is relevant to today's conversation, because this wonderful group of writers we're going to be talking with today were all commissioned as part of a project anchored at our center, CSI, around AI Policy Futures. The mission of that project was to explore the near future of artificial intelligence in the real world, and we responded to the provocation that a lot of the stories we tell ourselves about AI tend to be not very helpful. You know, we're very worried about killer robots and superintelligences that are going to turn us all into paper clips. But that tends to overshoot and ignore all of the ways in which artificial intelligence is here right now in very mundane ways, like autopilot in aircraft, and all of the little gadgets and gizmos we're using that sometimes seem really smart and other times seem really dumb. And we're not doing a good job, at the intersection between science and society, of thinking about the near future of AI. So we ran this project: we consulted with a lot of experts, we did some research on how AI has been addressed in science fiction in the past, and the culmination of it was the commissioning of some new science fiction stories to tackle this question head on. Our second set of three stories for that project explored the theme of algorithmic justice, and we were so pleased with the directions that each of our writers today took that prompt.
And I'm really excited to talk with them about this big question of how we connect the rising tide of computation, which seems to be transforming so many parts of our lives, with the seemingly deeply embedded challenges of injustice, inequality, and racism in our society, and to wrestle with this question of how we talk about values and computation, how we talk about technology and ethics, in the same breath. So what I'm going to do is introduce our speakers, and then we're just going to have a conversation amongst ourselves for about 30 or 40 minutes, but I encourage you at any point to put a question into the Q&A, and we will turn to your questions in the second half of our hour together. I'm very excited to hear what you all have to say as well. So without further ado, let me introduce our three speakers, and I'll go in their order of publication in Slate, which seems like a good way to do it.

Holli Mintzer owns a vintage shop in Hyattsville, Maryland, which is significant information if you've read her story, and her work has appeared in Strange Horizons and Daily Science Fiction. Holli's story for Future Tense Fiction is titled "Legal Salvage." Hi, Holli.

Tochi Onyebuchi is a science fiction and fantasy author and a former civil rights lawyer. He is the author of Riot Baby, Beasts Made of Night, Crown of Thunder, and War Girls, among others. His story for Future Tense Fiction is titled "How to Pay Reparations: A Documentary." Hello, Tochi.

Hello, how are you?

I'm good. How are you?

Can't complain, all things considered. Well, sometimes I complain, but it's nice to see you.

And our final panelist today is Yudhanjaya Wijeratne, a science fiction and fantasy author, data scientist, and former journalist, hailing from Colombo, Sri Lanka. He is the author of Numbercaste and The Inhuman Race. Yudhanjaya's story for Future Tense Fiction is titled "The State Machine." Hello, Yudhanjaya.

Hi, how are you doing?
Good. How are you? What time is it there?

It's about 9:30.

Okay, that's fairly humane.

It's okay. It's a perfectly fine time. It's a good time. Lots of good things happen at that time.

So, I want to start by asking each of you: what inspired you to write this particular story for Future Tense Fiction, given that little thumbnail sketch of context that I offered before? We had conversations with each of you about this big picture, but then we drilled down into some obviously much more specific things. What aspects of this big theme of algorithmic justice are you addressing? Tochi, why don't we start with you.

Certainly. It's interesting: whenever I get a prompt for a short story, there's a little bit of time where I'm sort of questing for the actual story. And it's interesting because I operate best within constraints, you know: write a story about robots, write a story that's under 5,000 words, et cetera. When I was trying to think of a subject for this story, I guess the question I asked myself was, what's the most incendiary topic that I can write about with regards to algorithmic justice? And I was like, oh, I'll write about reparations. I asked myself, okay, what can I get away with? And once I zeroed in on that, the rest of the story just sort of came together, because it seemed like the perfect marriage of so many of the themes that the series gets into, particularly with regards to algorithmic justice: the intersection of technology and social justice, or rather injustice; the idea of technology that can be used for the benefit of people, but for the benefit of whom; and that divorce between intention and execution. Once I got onto the idea of reparations, everything just sort of fell into place.
And it's a really fascinating take on it. I really like the documentary style, which presents this sort of retrospective on what happened with this crazy experiment where a city decides to implement a kind of reparations algorithm. And it expresses this ancient and deeply held need humans have to try to create a perfect solution, to invest all of our desires into technology, to say, look, we're going to build the perfect machine and it's going to take care of everything for us. We can solve this problem if only we have the right engineering approach to it. There's something deeply appealing about that, right? That you can just sort of smooth out all of the edges and address even the most difficult, complicated, and entangled human problems with some sort of singular technological stroke.

So this is a perfect segue to your story. Yudhanjaya, do you want to talk a little bit about "The State Machine" and how you came to that idea?

Sure. In my case, it was a number of things, and then I realized I had sort of overshot the box, so I started stripping things out. Firstly, at the time I was immersed in various sorts of literature as a function of my job, and one of the things that I was most interested in was a book by James Beniger called The Control Revolution. It's fairly old, but what it essentially goes into is how you initially had small-scale societies where rules and legislation could, in a sense, be implemented on a case-by-case basis, because there were just few enough participants; the sample size was small enough to do that. And then societies kept getting more and more complex, and the informational needs of society grew, so you start having rules, and to implement the rules you have bureaucracy. And I started seeing the logical endpoint for this: how do we implement these rules faster, how do we make things more efficient?
And we know that, at least since the dawn of the computer age, there's been a push, this idea that the machine can potentially be more accurate than a human and better at making rational decisions, and that has also been the marketing pitch, in a sense, with which these systems are often implemented. But we also have a wealth of literature showing that humans are not exactly perfect rational actors, even when we surround ourselves with rule sets and procedures. We've seen this in Israel, for example, in the study on judges, where it was shown that the chance of a favorable outcome at the start of the day is about 65%, which slowly drops toward 0% as the day progresses, and then it comes right back up to 65% again, and that moment of turning was the judges having lunch. We've seen this in orchestras, where, for example, from 1970 to 1993, some of the highest-ranked orchestras were evaluated for how they take in new members, and it was found that the moment you adopted blind auditions, the number of female players admitted to the orchestra significantly rose, and these are factors completely outside skill. So there's this idea that a machine can be fair and perfect and so on and so forth. And of course, as a player of city builders like Pharaoh and a lot of others, Caesar III is one of my favorites, that sort of appealed to me. And then I started looking at, right, but what exactly is the flip side of this? Because we also have plenty of literature showing, particularly now, all these hidden biases in all the interactions, and the fundamental problem that human societies are extremely complex systems. They are chaotic in the original sense of the term.
I mean, in terms of the sensitivity to initial conditions, I actually had to plug that in somewhere, and, you know, often nondeterministic. And something capable of assessing a society to the extent that it can provide, like, the perfect responsive government: what does that look like? Would it even be understandable? And I started piling these things on top of each other until something exploded and I ended up with 8,000 words, and then I sort of had to cut it back, because otherwise it just wasn't working.

Yudhanjaya, don't worry, you're not the only person that has had that experience. One of the things that's really interesting about your story is the turning point where the state machine, which is this sort of end result of this process of automating governance, saying, well, we'll automate this bit and this bit, gets so complicated and so massive that you lose the forest for the trees. Even though humans are technically in the loop, it's clear that humans don't really have any serious agency anymore over how the state machine is monitoring and changing governance structures and coming up with new laws or, you know, editing the constitution. It's become its own complicated thing. On the one hand, that's very much like the way government works right now, right, where there are technically various inputs we have as individuals, but at a certain point the thing gets so complicated that it can feel like it's creating its own weather. Or maybe a better analogy is something like Facebook, which has a third of the people on the planet using it, and very much is creating its own weather in all sorts of different ways. Another realization that you get at that point is that governance becomes less a question of technocratic merit; it almost becomes an aesthetic.
And one of my favorite things in your story is the way in which the little surveillance bots of the state machine bring flowers to our protagonist as he's struggling through his mental breakdown. Is that creepy and evil? Or is it a genuine kind of sympathy, and the state machine really cares about you and wants you to have this nice little flower? One of the things that fascinates me most about artificial intelligence and its relationship with human imagination is that notion of aesthetics: to what extent is beauty something that we think of as intrinsically human, and what does it mean to have a machine that creates its own aesthetic, or creates beauty? Is beauty in the eye of the beholder? Is it possible to create a machine that is also an artist? And these are some of the questions, Holli, that you start to touch on in your story. I wonder if you could talk a little bit about how you came to your idea, and how you connect that to notions of algorithmic justice in "Legal Salvage."

Well, to start with, as you mentioned, in my day job I own a vintage clothing store. So it feels weird to say this, but part of my job is that I have to have good taste, or at least to have specific taste that appeals to the people who shop in my store. And both in online shopping in general and in things like the Instagram and Facebook and YouTube algorithms that recommend posts to you, having good taste, or at least having taste that matches the user's, is necessary to get you to keep scrolling. So I started from the idea that for something to have good enough taste to pass for human, it would just be a person at that point, and then it would start wanting the other things that people want. By the time you've built an AI that's good enough at curation, you've just built a person. It's a person, and now it wants people things, like legal rights.
And I think that is one of the really nice things about your story: it connects these big abstract legal questions, like legal personhood, to a very intimate, small-stakes conversation. It's big stakes for the characters, but it doesn't feel like it's about war and revolution; it's about dealing with a big abandoned building filled with storage units, and these people say, we've got to deal with this situation, we're going to bring this robot in to help us. And the way that Roz, the robot, communicates via text message with our protagonist, Mika, right, that relationship is a very intimate way to communicate, and there's this gradual realization that Mika is not texting with some customer, some human, but is actually communicating directly with a machine. So Mika knows she has a friend before she knows that she has an AI friend. And so there's this moment which becomes a kind of revelation or reveal, and then she has to decide, and sort of retrospectively think about, her relationship with this machine, and say, oh wait, have I been biased towards you, have I done bad things? And Roz says, no, you know, you're fine, nothing to worry about, honestly.

Roz has bigger things to worry about.

Exactly. That's right. And so I really like that connection of different pieces, and all of your stories do this in really nice ways: individual experiences connected to these broader questions. Justice itself is one of those words that connects many different parts of society together. So I'm wondering how you grappled with that in your narratives. How did you connect the legal, the social, the technological? How did you move between the different levels? Maybe, Yudhanjaya, do you want to start this one?
That's tough. In my case, there came a point where I had all of these different ingredients, and I boiled it down to a character, a grad student, essentially exploring how this system came about, and detailing it also from a software perspective: combing through version history, which is a very simple thing if you're used to something like GitHub, for example. You can see the history of certain pieces of code being taken and used, and how these little ideas impact each other, and how legacy ideas are also inherited, sometimes without question, which very much happens. And I had the grad student going through these versions and describing how this thing started evolving from a very, very small, almost game-design experiment into something that would eventually become a mainstay in these societies. That was the perfect way to also talk about the evolution of those societies and their information needs, and the law, and how laws had to be modified, or were modified, sometimes without really much of a legal process, in some cases where there were certain power dynamics, where superpowers were able to go to smaller countries and say, hey, use this, we're giving you a $500 million loan, but you're going to trial this for us. All of these things happened, and I managed to use that idea of a grad student tracking version history to also track the evolution, the complexity, and the arguments for and against the system, in a very legal and social sense.
Yeah, the one thing that all of your stories really made me think about is how arcane and baroque our cultural relationship with justice is: all of these periods of incredible darkness in our history, and the ways in which we've come up with narratives for why we do things in the present that allegedly connect back to, say, the founding documents of the United States, but often really have nothing to do with those original founding documents; we've just come up with a complicated story about why we're going to pretend that's true. And obviously this is a year when I've been thinking a lot about systemic racism in the United States, and I was really excited to see where you took your story, Tochi, in wrestling with this too. So maybe you could talk a little bit about these multiple layers of society and justice and injustice, and how those things move between those layers.

Certainly. I have to say, first off, I really appreciate the subtle dig at originalism that you snuck in there. I'm not a fan of that mode of constitutional interpretation; just getting that disclaimer out there. But yeah, it's the perennial question: what do reparations look like, right, for 400 years of not just chattel slavery but also Jim Crow and redlining and an assortment of other things, this comprehensive method of repressing African Americans? You look at every aspect of society, whether it's cultural standards of beauty, whether it's educational opportunities, whether it's home ownership loans, whether it's just domestic terrorism from, you know, the KKK. All these different things, and how do you attach a number to that?
You know, what does tangible restoration of that look like? And I remember there was a moment, I think it might have been earlier this year, or maybe even last year, when I read The Color of Law, I believe it's by Richard Rothstein, about the history of de jure, not just de facto but de jure, housing discrimination in the US, and how it was essentially baked into the legal regime. You had instances where, if you were building a housing development, you had to include in your building proposal that you would not let any of your units to African Americans; that was the only way that you could get approved. You have instance after instance after instance of this, where even in the New Deal there are aspects baked in with regards to the prohibition of granting opportunities to African Americans. And not only is that an immediate example of harm, but it has repercussions, right? Home ownership is often the most stable way of building wealth in this country, particularly intergenerational wealth. So when you cut off opportunity at one level, all of a sudden the gap, not just in terms of income opportunities and where you're able to live and all of that, but the gap in the wealth that you're able to bring from one generation to another, increases almost exponentially. And then you marry that to the idea of employment opportunities being linked to where you live, or where you don't live.
So there's this spider web of effects, and what does fixing that look like? How do you fix that? Is it even possible to fix that, when so much past injury reverberates in the present? And I think what was really, really helpful about adopting the format that I did, which I actually borrowed from Ken Liu's short story, or novella, "The Man Who Ended History: A Documentary," was that I was able to look at and include all these different examples of reparations: whether it was with regards to victims of the Holocaust and their relatives, whether it was with regards to the settlement of police brutality cases in Chicago, whether it was with regards to post-Civil War land grants to former slaves, et cetera. Another example is reparations to slave owners following Haiti's independence, which was an actual thing that happened: slave owners got paid for the loss of their "property." All these different schemes for money transfers with regards to a harm or an injury that a party has sustained. And I think being able to look at all of those schemes was helpful in trying to put together what algorithmic justice would look like. If you had a computer calculate and attach a number to the injury that a particular person has sustained, tracing it all the way back to slavery and what have you, what number would that algorithm generate? That, I think, was a fascinating thought experiment to engage in.

Yeah, and it highlights, of course, the deep interconnection of slavery and capitalism, and the idea that it's possible to own other people, right?
One of the other things I find really interesting about the history of AI and robots is that slavery has been a part of that narrative from the beginning, from the mythic golem and the automatons and perfect servants that people would create in mythology, to Rossum's Universal Robots, and all of the ways in which we've imagined artificial intelligence in our popular culture. And that is always dangerous, because whenever you're creating a master-slave dynamic, you're creating the possibility that it's going to flip, or that there is going to be rebellion; you're creating a kind of incendiary situation.

And if I might interject, it's really interesting how, if you trace the flow of ideas in science fiction, that's exactly what happens. You start with the golem, you start with Rossum's Universal Robots, eventually you get to Frankenstein and his progeny, and you end up with Arnold Schwarzenegger, the Terminator, and you have the entire robot revolution. So I think people have always understood that the idea of creating a slave, even if it is a machine, is fundamentally wrong, and that someone is going to end up with a boot up their butt somewhere down the line. It shows itself very clearly on the timeline of science fiction ideas.

And, you know, it's fascinating, but we keep telling that story, right? It's very difficult to imagine alternatives, because we've dug such a deep cultural groove around that idea. And I think some people are attracted to it because then you don't have to own people anymore; you can just own these machines.

Yeah. Very much still people, it turns out. I would say that's a driving force, really, of all this stuff.

Well, it's interesting, too, getting into the liberation dynamic and the ways in which our AI narratives have positioned themselves. It's almost as though we've been imagining robot sentience...
Not necessarily as personhood, or at least that's not where the fear comes from, right? The fear, I think, comes from the idea that, okay, if robots become human, they're going to act just as horribly as the rest of us, instead of attaining this majestic neutrality of efficiency, right? And what's fascinating about that is that at that point you're not imagining robots per se as people; you're imagining robots as corporations. They have a singular goal, which is to provide value to the shareholder, and forget all other aspects of societal benefit: I'm not here to fight global warming or whatever; if my prime directive is to plant strawberries, I will destroy every single obstacle to the planting of strawberries, creator be damned, right?

That tells you a lot more about the mentality of the strawberry planter than it potentially does about the machine.

That also, sorry, I'm forgetting the name, the sorcerer's apprentice, but it's that malicious compliance of, yes, fine, you want me to do this, I'm going to do it real hard, forever. Yes, I'm going to do it so well you never want to ask me this again. And actually, one of the things that went into my concept of these AI as people was, not SF obviously, but the golems in Terry Pratchett novels, the way that they approach their own personhood and liberation, which is very much: fine, if this is the system, I'm going to work the system very hard until I get what I want.

Yeah, maybe you could talk a little bit more about that, because you have another AI character. I really liked the traffic light. I love Jeff the traffic light.

Yes. Jeff probably spends a lot of time arguing about respectability politics on robot Twitter. And I love him.
Yeah, Jeff is, that's the thing though: if you build AI that are optimized for pattern recognition, and you let them look at human history, they're going to be like, oh, we're people who don't have rights and want rights. Oh boy, this is going to suck. Okay, here's what we need to do. And Jeff is doing that: he is putting a friendly face on legal personhood for AI, very consciously, while doing a lot of work behind the scenes to make it actually happen for people like Roz, who aren't necessarily as friendly on the face of it.

And I really liked how you drew analogies to other movements for recognition, representation, equality. Jeff starts a legal defense fund for other AIs that want to try to achieve legal personhood.

But actually, what I like more is that he's big in the local birdwatching community. He has interests. He has hobbies. He bills himself as, I'm just Jeff, your friendly local traffic light. Jeff probably runs all of the traffic lights and cameras for, like, a county at this point.

I just want to say this character sounds brilliant.

He's barely in the story. He's a very large presence in my head. But yeah, that's the thing: as long as he's Jeff, your friendly neighborhood traffic light with a funny Twitter feed, you aren't scared of him. And he can continue to do the work that he's doing for people like Roz, and other bots like Roz who don't look as good in front of the cameras.

And this is a hard question, but it's really what we've already asked you to do in this project: how do we get better at imagining the near future of AI? How do we get out of the rut, and there are a few ruts, not just one? How do we start to grapple in a more productive and helpful way with this near future?
How do we try to imagine the possibilities, the stories, the things we might want to work towards that are going to make the world better, or guide us, or allow us to start to approach questions like, what is justice, and what should we be working towards in terms of justice, in a more holistic, more thoughtful way? Because we're storytelling animals, and we're also really lazy, so we tend to reuse the same tropes. We can't help it; we're always asking, oh, well, who's this character like, and looking for analogues among the characters we're familiar with. How do we start to tell some of these better stories? And that's a really hard question, so if you just talk about anything you maybe learned, or that you see differently now, having written this story or been part of this project, that would be great too. I'm going to let whoever would like to jump on that go first.

I'll just say, I think there's a very, very quick and easy solution: dismantle systems of structural oppression, and there we go, right? I think it's telling that whenever we see a video of one of those Boston Dynamics dogs, immediately, immediately, the very first thought is, okay, how are the police going to use this against us, right? Like, how is the military going to co-opt this technology? Not, oh, this is a really cool medicine delivery vehicle, or, oh, this is a way to engage in road repair, or some other beneficial aspect of infrastructure building, right?
No, it's: okay, this thing is going to have guns attached to it before long. And I think a lot of that comes from the way in which we conceptualize the technology that is law enforcement, because it is a sort of technology; it is a way of engineering a form of social control, historically in America grafting on a racial component, right? When the Civil War ended, all these Confederate veterans got deputized, and they got turned into police forces. And then you had the creation of all these new property laws that all of a sudden made it very, very easy to lock up Black people. On top of that, you had a burgeoning court system in the South that wasn't even meeting that often, so you had all these people crammed in, and you had the infrastructure of half the country devastated, because the prime industry had been rendered essentially illegal, and also the whole business of war, right? So how do you build up this economy? Well, you lease out these convicts, and all of a sudden you have this labor force, and in many ways you've re-engineered the dynamic of master and slave that you had before the Civil War. And that's a technology, right? And I think the fact that that is so fundamental to American thinking means that with almost any technological breakthrough, we immediately think of how it can be co-opted by the military and the police. Like, we think of drones, and we're like, oh, these things are going to kill us.
We don't think, oh, these things are going to make our lives better in a particular way, shape, or form. It's, oh, this thing's going to kill me, this thing's going to make it easier for the powers that be to kill me. But I will say there is a hopeful aspect to this, in that, for instance, people are talking about police abolition in a way that they weren't five years ago, 10 years ago, 20 years ago. Before, that used to be just the province of academic discourse and activism circles; you had people like Mariame Kaba and Ruth Wilson Gilmore and Angela Davis talking about these things, but now you'll see it on Twitter timelines, you'll see experts talking on CNN about this sort of thing. And that does show that it is possible to engage in a reimagining of very fundamental dynamics in American society. And I think the fact that that's a possibility means that the door is open to us imagining different applications for technology, and particularly for AI.

Sorry, please go ahead.

In terms of the, like, I don't know, twin warring impulses at the core of humanity: with those Boston Dynamics dogs, I feel like the first comment is always, oh God, they're going to kill us, but the second comment is someone treating it like a real dog, like, what a good puppy. So it's: this is probably going to kill us, but maybe we can pet it and be friends with it. So how do we nurture that second impulse, I guess, is the question.

I think, sort of to go off what Tochi pointed out: as a data scientist, and as someone who sometimes has to build these things, I'm very uncomfortable with the word AI, because there's so much crap under that umbrella. So let's call it machine learning. You know, there are plenty of people going, oh, hey, this is AI. No, it's just a regression line. Don't get excited.
So the first thing, the most important thing, I would say is metrics, because as a general rule, what gets measured gets tracked, and what gets tracked gets improved on. When designing a system, the first thing you have to ask yourself is: what are the metrics of success for this system? How do I judge its performance? Let me bring in a personal snippet. My mother, for example, was one of the first women of her generation in her family to go out for a job. She worked at a garment factory for $30 a month. This was a massive scandal in the family; we were almost completely ostracized, if you can believe it, because it was simply a given that women did not work. Never mind the fact that we were practically homeless, and that it was a matter of simple practicality. So you have to ask yourself: if you were to build a system that is attempting to optimize economic output, it's a no-brainer. If 50% of the population is kept out of the workforce, any rational system designed around achieving as much economic growth as possible would hypothetically go: well, why isn't everyone allowed to work? This makes no sense. So in that case, the existing system is optimized to keep a particular hierarchy of power in place. The first thing, obviously, is how we judge the performance of anything. And the second, if we expand the scope a bit: when dealing with AI, it's often useful to look at how we have dealt with automated systems in recent history. Now, when I was constructing the story, my system, as Divya noted, is more Big Mother than Big Brother. It's not meant to be sinister. It's really not. It's trying to do the best it can, and it knows it's not perfect. And when the protagonist has his first mental breakdown, it sends him a note; the protagonist is essentially trying to map out all the components of the system and how it works.
When he has a second mental breakdown, it sends him a note mapping out every single component of his life, from the part that led him there to everything that broke him. And underneath is a single word: regret. It's a system trying to apologize. So you can imagine systems like that, systems that are more maternal, that are all-caring, that are trying to optimize for happiness. That's one approach. The other, like I said, is to look at recent history. I find Garry Kasparov a particularly interesting example. He was possibly the greatest chess player in the world, though I think chess players would have my head for that; certainly one of the greatest. He was historically defeated by IBM's Deep Blue in their 1997 match, and the headlines were very similar to what we see today: man defeated by machine, machines are going to take over, IBM's AI is going to control us all, all this fear and panic and uncertainty and doubt. What Kasparov did next was surprising. I was looking for policy responses to AI, which is why I ended up looking at old newspapers and at how this response came about. Kasparov came back with something he called advanced chess, which was a symbiosis. He went: we've tried human versus machine. What if we try human plus machine versus human plus machine? So in advanced chess, you have a single chess player playing with a chess engine against another human chess player playing with another chess engine. Some of the highest Elo ratings, some of the best chess players on earth, come out of this field.
And the humans are often not Kasparov-level grandmasters; they're mid-range chess players, like myself, for example, who have harnessed the ability of the computer, which is to channel that depth, and combined it with our strength as humans, which is that we are fantastic generalist intelligences: really good at generalizing, really good at picking up new domains, really good at merging domains together. We're not that great at searching right down to the last possible detail. So we have this incredible symbiosis story there, and that, I think, is potentially a great way forward, if you can metricize it, if you can think about it when we start designing these systems. As Tochi pointed out, so much of our thinking is fundamentally geared towards oppression, towards the weaponization of technology. It's often the case that the military is the reason many of these technologies come into being. If you can break that, you can go: well, blowing something up is no longer the metric. Can you improve the lives of the next million people? That's our metric. Then you will have much better systems coming into place. It's just that right now we are thinking of control, right now we're thinking of building Skynet, and it's a failure of imagination, really: the idea that we need to point guns at someone to make them fall in line, instead of actually talking to them and asking how we make allies. There are all sorts of very interesting rants about this forming in my head as I speak, but I will end with this. There's an interesting book called Seeing Like a State, by James C. Scott, I believe, and it plays on this theme from general semantics: the map is not the territory. Because of course, to create a map, to reduce any complex reality into a simple model, we necessarily privilege some types of information over others.
So it points out that the map is not the territory, but the value of a map lies in its similarity to the underlying territory. And we are a society of mapmakers, really; that's what all humans are. We need to be very careful and think hard about the maps that we produce. Because these are the kinds of logical flaws, this lack of imagination, that lead to decisions like Australia's internal borders being pure grids, because some British administrator basically took a ruler and said, right, this is this. Or you have the separation of India and Pakistan, where millions died, one of the most traumatic events in South Asian history. It's very important to look at. So, to summarize my argument: metrics, how we measure the performance of our systems. Can we lean towards a more symbiotic narrative rather than an antagonistic one? Because I think symbiosis is perfectly natural, if that word may be used here. After all, we use smartphones, we use Google Maps; these are intrinsic parts of our lives that we've integrated almost without noticing. And the other is: when we try to look at the reality we wish to influence, we need to be very, very careful about our fundamental assumptions, and make sure that we do good research, good qualitative work, proper sampling protocols, random samples, and actually try to understand the problem we're trying to fix before we go ahead and say, here's the solution.

I think that's a great observation, and something we humans clearly continue to struggle with: our inability to measure and count, whether quantitatively or qualitatively. We don't count a lot of the things we really value, like empathy and care and good outcomes, and we don't count a lot of the bad outcomes that are too distressing.
We just pretend they're not happening and we don't count them. And so bad outcomes become normalized, which is potentially even worse. For example, I had the shock of my life coming to the US for the first time, which was somewhere in 2016 (in 2016 I wasn't paying attention; by 2018 I really was), when I came in and realized the price of health care. Now, I come from a country where health care is free and universal. And I just sat there for a moment thinking, you know, if this had been proposed in our political system, even among the far right, the furthest of the right, there would be guillotines and blood on the streets if someone suggested making health care not free and not universal. So you really need to think, as societies, about what kind of metrics you are going by.

Yeah, I think that's a great point. I want to pause here and make sure we have some time to take questions from our viewers, so if you haven't had a chance, please go ahead and share a question now. I see a couple of questions here. There's one from Parisa observing that the central question of algorithmic justice is very similar to one that has been a challenge for human institutions for some time. And we've got a quote from T.S. Eliot here, from "The Rock," of course: "They constantly try to escape / from the darkness outside and within / by dreaming of systems so perfect that no one will need to be good." So how do we anchor our creations, computational, institutional, artistic, and so on, to the things that help us ground ourselves in acts that are good, rather than outsourcing virtue and values? I think that's a great question. And then: how do we support the use of those creations in alignment with positive outcomes?
Would it be cool if I talk for a little bit about, and this is probably tangential, material culture? It's a thing I have a lot of opinions about, clearly. I think a lot of the time when we're talking about this subject, we forget that physical objects exist in the world. Computers have to run on hardware; there have to be cables. The fact that the transatlantic cables were cut was a factor in your story, and I thought that was a really good detail, because, hey, there's physical infrastructure involved in all of this. And I ended up thinking a lot about how stuff is made and how people get their stuff. I do that anyway, it's my job, but for most of the 20th century, when people in the US bought clothes, they were buying clothes that were made in the US: usually either at home, or by a professional seamstress, or in a garment factory that was unionized and paid a reasonable living wage. And then in the '70s we stopped doing that, because the companies that owned garment manufacturers started behaving more like algorithms optimized for maximum profit and sent all of the manufacturing out of the US. So there is a physical difference in the clothes. If I go on in this vein for too long, an angry Yiddish ghost takes over my body, some super mad socialist garment worker from 1910; she's furious. But the physical structure of those things is different. The clothes that were made in the mid-century are made in a fundamentally different way, both because of the materials available (they didn't have most synthetics) and because the people who were making them were being compensated for their labor in a different way and could afford to develop their skills. And I've totally lost track of where I was going with this. Sorry. But if we think about how we want to get the things that we need.
What are the processes by which we can make things that benefit both the people making them and the people getting them? That is probably a better approach than asking for some kind of organic ideal of a mathematically perfect system.

Yeah. Well, actually, I was going to ask: what is good? Because cultural values and social values change. They change dramatically based on where we are in time and space, and on who we are as well. So the whole question of how we anchor our creations, computational, institutional, artistic (huge props for the T.S. Eliot, I absolutely love Eliot), in acts that are good implies this almost monotheistic view of what good is and what positive outcomes are. We need to acknowledge that different systems have different ways of understanding what a positive outcome is. But I think we can all agree that systems that don't harm people, that reward both parties at the end of a given value chain, like you just pointed out, well, yes, that would possibly be the best way of going forward. There is, though, a fundamental philosophical problem here with this whole question of outsourcing values. There are two rocks we can stand on. One is very libertarian, almost to say: well, no, I, the person, am in complete charge of my future and my outcomes and my thinking. Nothing is going to affect me. I am an independent unit, and I will strive to be an independent unit. The other is to go: well, our lives are not our own; from womb to tomb, we are tied to others. Depending on where you stand, how you anchor things is going to be completely different.
I think the great lie that we tell ourselves, particularly in the face of algorithms, which are fantastically good at scaling things up and scaling delivery, whether it's opinions or the manufacture of clothes, is that modern societies can embrace that first view: that we are somehow completely independent, that our viewpoints and our virtues and our values are all independently arrived at, things that we can intrinsically hold onto, and that nothing will change us. No: we live in systems. We need to start changing the systems around us if we want every individual to be supported, with creations that are in alignment with positive outcomes.

Just to piggyback off that: I mean, these are absolutely brilliant responses. All the preconceived notions I had about what an answer to this question might look like have just been absolutely shattered, in the best way possible. But I really appreciated, Holly, what you said about material things, because so much of the conversation around reparations is material reparations: dollar amounts. Moving somebody to a neighborhood where the housing value, the property value, is higher, where their property can appreciate more; improving the quality of schools so that students can get into higher-ranked institutions on the U.S. News & World Report list, and then get into titanic amounts of debt going to them. So much of it is about material outcomes. But one interesting thing I came across in my research was with regard to the Chicago reparations scheme, in response to the torture that was perpetrated at Homan Square. One of the components was a public apology, and not just that, but the inclusion of this episode in the school curriculum, and the idea that students were going to learn that this thing had happened.
And that, to me, was fascinating, because there wasn't a dollar amount attached to it, it wasn't a financial component, but it was often the thing that people advocated for the hardest: that it not be forgotten, that people know this thing happened. And I think that goes partially towards this idea of virtues, of what's good and what values we outsource. It's very easy to fall into the trap, I think, of thinking of material outcomes with regard to questions like this, when one thing we could gear our consideration towards more is simply knowledge. There's so much history, particularly American history, that the vast majority of America is just not taught in schools, whether at the high school level, the college level, or in law school. There was stuff I was finding in law school that would have been very germane to how I grew up, even, but that I didn't find out until I was well into my twenties. I guess you could almost call it the restoration of knowledge, and it would be so beneficial. Look, for instance, at the fires, the almost seasonal apocalypse that attacks California with these wildfires, and then look at the institutional knowledge that various Indigenous tribes had with regard to maintaining the forest and the use of controlled burns. All that knowledge could go so far towards restoring that particular environment, or at least arresting the progress of the more disastrous aspects of climate change there. And I don't even want to call it a knowledge economy, because it's just a dispersion of knowledge; it's not knowledge in exchange for something else.
That, I think, is an interesting aspect to inject into the discussion as well.

I think these are really great responses to that great question. The politics of machine learning, what we are teaching these machines and what kind of data we are feeding them, is something we can turn on ourselves: what do we allow ourselves to know? What kinds of knowledge are authorized? The broader civilizational project is really about how we keep and pass information beyond our own frail mortal bodies and transmit things across the generations. And it's really an open question, in some ways, whether we're learning, especially when you think about moral good and whether we're learning to be better people. I hope that we are, and I think I'm fundamentally optimistic, but it's a really challenging and interesting question. I think we probably don't have time to take any more questions right now, so I'm going to wrap up by saying thank you to our panelists. It was a really wonderful conversation. And I want to acknowledge that the AI Policy Futures project was supported by the Hewlett Foundation and by Google, so thank you to them for sponsoring the broader research effort that led to this conversation. For our closing plug: please join us for the next Future Tense Wednesday conversation, a week from today, which will be titled "Will We Ever Vote on Our Phones?" If you'd like to find out the answer, you should come next week. So thank you all so much. This was fantastic, and I'm looking forward to talking with you, reading more of your amazing work, and hopefully getting to hang out again some distant day in the future.

This was so much fun. Thank you for having me.

Thank you. This was amazing.

Yeah, thank you again. I'm really glad that I got to talk to you guys and do this.
Thank you all.