All right, it is one o'clock, so we are going to get started. Hi, and welcome to New America for today's event, "What Sci-Fi Futures Can and Can't Teach Us About AI Policy." I'm Kevin Bankston. I'm the director of New America's Open Technology Institute, and the co-lead of a project called AI Policy Futures that we're doing in conjunction with our friends at Arizona State University's Center for Science and the Imagination. My co-lead in this project, the director of that center, is standing right beside me, and I'm going to let him tell you a little bit about our project before we get started with today's content. Thanks, Kevin. Thanks for joining us. So AI Policy Futures is a research effort to explore the relationship between science fiction around AI and the social imaginaries of AI, and what those social imaginaries can teach us about real technology policy today. We seem to tell the same few stories about AI, and they're not very helpful. They're stories about killer robots or superintelligence, and while we're talking about that, we're missing the boat on things like airplanes falling out of the sky, autonomous vehicles, and all sorts of things that are, in the very near future, going to impact our lives in very powerful ways. So this project is going to create a taxonomy of the different visions of AI in the global literature of science fiction, and see how we can apply that to commission original stories, to be published in Slate, that will explore real-world, useful fictions about the near future of AI. This is supported by the Hewlett Foundation and Google. We're really delighted to be able to have this event and to continue this work with all of you. Thank you. Thanks, Ed. So I am the person who is kicking things off with a brief talk to answer the initial question: why is science fiction so important that we're spending a half day at a think tank talking about it? And at this point it would be good for my slides to come on. Great.
Well, we're here because the imaginary futures of science fiction impact our real future much more than we probably realize. There is a powerful feedback loop between sci-fi and real-world technical and tech policy innovation, and if we don't stop and pay attention to it, we can't harness it to help create better futures, including better and more inclusive futures around AI. The paradigmatic example of the sci-fi feedback loop is probably space travel. When Jules Verne wrote From the Earth to the Moon in 1865 and H.G. Wells wrote his 1901 novel The First Men in the Moon, they didn't quite know how we'd get people into space. One author shot his characters off into space with a big space bullet; the other made up a fictional anti-gravity mineral. They didn't know exactly how we'd get to space, but the adventures that they wrote directly inspired a boy named Robert Goddard, who grew into the man who launched the first liquid-fueled rocket in 1926. Just three years later, Hermann Oberth, who as a boy loved From the Earth to the Moon so much that he memorized it word for word, test-fired his own rocket with his assistant and fellow science fiction buff Wernher von Braun, who would go on, after an ignoble stint as a Nazi, it should be noted, to become director of NASA's Marshall Space Flight Center and chief architect of the Saturn V rocket that finally did carry men to the moon, 104 years after Verne's book was published. He did that with the help of a younger generation that had been raised on Flash Gordon and Buck Rogers and the novels of people like Robert Heinlein. This is an actual recruiting ad from JPL, the Jet Propulsion Laboratory, at the time, banking on nerds liking science fiction. And what they did in turn inspired a whole new generation of more realistic science fiction about space travel, like 2001: A Space Odyssey. In fact, director Stanley Kubrick hired two of NASA's top scientists to spend two years designing his unprecedentedly realistic vision of the future.
That team in turn consulted extensively with over 60 technology companies, researchers, and experts, with some even getting involved directly in the design. GE helped envision the space station and the lunar base. Bell Telephone contributed to the design of the film's video phone booth, which was one of our first popular visions of video chat. And IBM designed several of the spaceship's control panels, along with a tablet computer that predated the iPad by more than 40 years. Perhaps more on topic for today, IBM also worked on the HAL 9000 computer, as did a real AI expert, MIT's Marvin Minsky. In part because it was so well informed by real science and tech, 2001 had a unique impact in terms of influencing our modern conceptions of space travel, personal computing, and AI: the feedback loop in action, if you will. This is especially evident when you look at the influence of HAL 9000, whose capabilities basically set the research agenda for AI researchers in the following decades, setting goalposts that have at this point mostly been achieved. Just like HAL, AI today can play and beat us at games like chess, recognize our voices and faces, read lips, understand and replicate human speech, kill people, and so much more. Indeed, this feedback loop of sci-fi inspiring real tech inspiring sci-fi inspiring real tech is so well documented and established in the space and defense sectors that it's essentially been institutionalized, with the DOD and DHS and NASA and the like regularly bringing in sci-fi writers as strategic foresight consultants, holding sci-fi story contests or commissioning new stories from established writers to solicit fresh ideas about the future of conflict, and even teaching some sci-fi in military academies. This cottage industry of sci-fi-as-futurism has in the past decade or two actually been spreading into the private sector, with regular companies like Nike or Home Depot or Boeing working with sci-fi writers as consultants.
Sometimes the influence of such writers in the real world has been dramatic, and not necessarily for the best. For example, in the early '80s, two of the era's most popular and conservative sci-fi writers, Larry Niven and Jerry Pournelle, organized the Citizens' Advisory Council on National Space Policy, which directly prompted Ronald Reagan, a science buff himself, to launch the ill-fated and enormously expensive missile shield project known as the Strategic Defense Initiative, also mockingly known as Star Wars; here is an editorial cartoon from the time. Sci-fi also impacted our emerging cybersecurity policy when WarGames, the story of a teenager who hacks into a war-gaming AI at NORAD and almost starts World War III (and if you don't think that's science fiction, come at me), totally freaked out Reagan and Congress. It led to a major revamp of how DoD handled its computer security, and it also prompted Congress to pass our first anti-hacking law; they even showed clips from the movie in Judiciary Committee hearings, which was crazy. Of course, when talking about sci-fi's impact on policy, we also can't skip issues of privacy and surveillance, where science fiction, one book in particular, has literally defined the debate for over half a century, although Minority Report has given it a run for its money in recent years. These privacy dystopias are helpful because they provide us with what science fiction writer David Brin calls self-preventing prophecies: they serve as a warning for what we want to avoid. However, just as often, Silicon Valley has read such dark science fiction visions as a design spec for their next big thing. In particular, the near-future, information-technology-focused sci-fi genre of cyberpunk in the '80s and '90s, exemplified by the classic hacker noir Neuromancer, whose author William Gibson coined the term cyberspace, has been deeply influential.
One sci-fi writer who got his start around the same period, this dapper fellow, popular proto-cyberpunk author Neal Stephenson, has helpfully described sci-fi's influence as less of a direct one-to-one inspiration and more of an invisible magnetic field that roughly orients people's imaginations in the same technological direction, giving them common ideas and language for communicating about what they are imagining. So you say you want to build something like a communicator from Star Trek, or a tablet computer like in Star Trek: The Next Generation, or even an orbital space station like in Star Trek: Deep Space Nine, and everyone knows what you're talking about and can work together to move in that direction. Stephenson's own books serve as a great example of having had a very strong magnetic pull in Silicon Valley. The global VR network the Metaverse, from his own 1992 cyberpunk classic Snow Crash, was a key inspiration for real VR and AR tech, both then and now. He's actually still the chief futurist at the VR startup Magic Leap, and was a big inspiration for a generation of internet tech folks generally, especially including Google founders Larry Page and Sergey Brin. The more recent novel and very popular movie Ready Player One, by Ernest Cline, has played a similar magnetic role for VR today; Oculus founder Palmer Luckey actually used to hand it out to new employees to align them with his product vision. Similarly, Stephenson's Cryptonomicon, a story about cryptocurrencies and data havens, was, according to Peter Thiel, required reading at the startup PayPal. Stephenson even played a role in the founding of Jeff Bezos's space launch startup Blue Origin. The company was born in part out of Bezos's own lifelong love of sci-fi, especially Star Trek, and his conversations with his old Seattle buddy Neal Stephenson, whom he hired as the company's first employee.
So these are just a few of the tech billionaires who do what they do because of the techno-libertarian fantasies they read and watched when they were 12. You literally can't throw a rock in Silicon Valley or Seattle without hitting one of these billionaires who is a fan of sci-fi, including this gentleman, Steve Wozniak, the co-founder of Apple. And we still haven't talked about this guy, the billionaire elephant in the room, Elon Musk, who directly attributes his entire life philosophy and mission to the sci-fi he read as a boy. Here's a tweet referring to Douglas Adams of Hitchhiker's Guide fame and Isaac Asimov of Foundation and I, Robot fame. And it's Musk in particular who puts a fine point on, I think, both the good and the bad of sci-fi's influence on tech. Standing alone, I think it's probably a good thing that sci-fi inspired this guy to dedicate his life and his billions to getting us off of fossil fuels and off of the planet. But as you may have noticed, there's a certain homogeneity to the people I've mentioned so far. This feedback loop has been a somewhat closed loop that reflects the lack of inclusion in both the fields of tech and sci-fi. A series of privileged white men inspiring (sorry, Siri, she doesn't like what I'm saying and she's trying to get in the way), a series of privileged white men influencing privileged white men, with grand wish-fulfillment narratives of great men changing the world through their inventions and their adventures. Too often missing from this feedback loop, and relatedly from the field of AI, has been a recognition of, an inclusion of, and a centering of the lives and perspectives of people, and not just individuals but communities, who are not middle-class cisgendered white men, mostly from the American West Coast. Or, put another way, sci-fi has too often held up people like this guy (I'm a Picard fan myself), which is why I was so heartened to see N.K. Jemisin's historic victory last year at sci-fi's Oscars, the Hugo Awards.
Not only was she the first black author to win the Best Novel Hugo, but she is now the first author ever to win it three years in a row, for her Broken Earth trilogy. Side note: I was equally heartened to see Malka Older, who is with us here today, nominated for the Best Series Hugo this year for her amazing Infomocracy series. Ms. Jemisin has been a tireless voice for inclusion in sci-fi literature and sci-fi fandom, and it's with a few inspirational words from her speech that I'm going to close this talk, words that could just as easily apply to the field of AI: "I look to science fiction and fantasy as the aspirational drive of the zeitgeist. We creators are the engineers of possibility. And as this genre finally, however grudgingly, acknowledges that the dreams of the marginalized matter and that all of us have a future" (not just Elon Musk, that's my addition, not hers), "so too will go the world. Soon, I hope." So thank you all for coming. We are going to start our first panel, which I think is going to sound some of the same themes as my talk, and so I'd like to introduce our panelists now to talk about AI policy in reality. So folks, come on up. Where are our panelists? Okay, then they will be coming out momentarily. I think I just finished my talk a little faster than expected. Pardon me, folks, they're coming. Okay, great, welcome, friends; hey, the gang's all here. I am not going to introduce these fine folks; they are going to introduce themselves. And what I'm asking them to do is introduce themselves in the usual way while also answering the question of what the key questions around AI are that are coming up in your field, as you characterize your field, or, put another way, what you find yourself usually talking about when you are talking about AI. So let's start with you, Rumman. My name is Dr.
Rumman Chowdhury. I lead Responsible AI, which is the ethics and AI arm of Accenture, a massive global consulting firm. So I actually talk about a lot of things when I talk about artificial intelligence, but specifically how it impacts communities, things like bias, discrimination, and fairness, and also how we get the right kinds of narratives to build our tools and products as companies start to actually implement and enact artificial intelligence. And when I say companies, the interesting thing is I'm not actually talking about the Elon Musks and Jeff Bezoses of the world; I'm talking about Nestlé and Unilever and Coca-Cola. So as the companies that are already in our daily lives adopt artificial intelligence, what does that mean, and how do we do it responsibly? My name is Miranda Bogen. I'm a policy analyst at Upturn, which is a non-profit based here in DC that promotes equity and justice in the design, use, and governance of digital technology. What that means is that we're looking at two main areas: economic opportunity and criminal justice. So when we think about AI, what we're often thinking about is scoring people: how are people finding jobs, credit, housing; how are people being rated on their risk of committing a crime; things like that. And while it often is talked about in terms of AI, very rarely is what we're actually seeing AI; we're still at the very early stages, at, like, statistics. But at the same time, using the frame of AI has gotten a whole range of new people interested in these sort of legacy issue areas of civil rights who weren't interested before, both because it's kind of a sexy new thing and because there's a new opportunity to make change, and maybe we can break out of some of the policy patterns that we've fallen into in the past.
I'm Elana Zeide. I'm a PULSE fellow in AI, law, and policy at UCLA School of Law, and there I also study automated decision-making, looking at it mostly from the realm of education technology throughout lifelong learning. So I'm looking at scoring systems, how they structure human value and human capital and affect human development, and also, in that vein, things like efficiency, productivity, and access to opportunity. Hi, I'm Lindsey Sheppard. I'm an associate fellow at the Center for Strategic and International Studies; we are a defense and security think tank here in DC. There I focus primarily on emerging technology, national security, and defense issues. So when I work on artificial intelligence, primarily we're thinking about how we dispel the myths, how we set expectations, what reasonable use and actual use look like, and then how you actually go about the process of bringing these technologies into our defense and intelligence structures. So I would say the big macro question we focus on is not the algorithms themselves; we look at the underdeveloped ecosystem surrounding the algorithms: how do you bring in the right workforce, how do you train your workforce, how do you get the computing infrastructure and networking infrastructure, and how do you have that top-level policy guidance to actually bring this technology in to support US values and interests? Great. So we're talking a lot about algorithmic decision-making, or we could also characterize that as narrow AI, artificial narrow intelligence, as opposed to, say, artificial general intelligence like Skynet or most things you see in science fiction. We're talking about algorithms that are trained on sets of big data (remember when we used to say "big data" all the time? We don't say that anymore; we say "AI"). But often that data can reflect biases in our real society, or can simply be biased data sets, which leads to issues of algorithmic fairness, which is the
center of your work, Elana, in many ways. So I was wondering if you could start by talking about what the top issues are around algorithmic bias as they apply to human potential generally. So there are many ways bias can creep into algorithms. It can come in from the data itself: historical data that reflects patterns of inequity. It can trickle into the models that are then used to judge people. And it can trickle, in terms of what I talk about on a day-to-day basis, into the technologies that are then used to determine where people should be in life, what level they should be at in school. We're looking increasingly at the idea of completely automated personalized teaching systems, so what you should learn and what level you should be at, and at recommender systems: where should you go to college, what should your major be, what should your professional development be. And then it moves into the hiring realm. So in this way, because you're using predictive analytics, you're really replicating existing patterns, and the question is, do we want to do that in human development, and in places where opportunity, at least, is the rhetoric that we use? Miranda, you address these issues on a variety of fronts, but especially in the context of criminal justice. Can you talk a bit about that?
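One way the bias Elana describes hides in a scoring model: the model can look accurate in aggregate while performing far worse for one group. Here is a minimal sketch of a disaggregated accuracy check; the group labels and counts are entirely made-up illustrative numbers, not from any real audit.

```python
# Toy audit with made-up numbers: an aggregate accuracy figure can mask
# a large gap between subgroups, so the metric should be disaggregated.
results = {
    "group A": {"correct": 98, "total": 100},
    "group B": {"correct": 65, "total": 100},
}

# Overall accuracy pools everyone together and looks acceptable.
overall = (sum(g["correct"] for g in results.values())
           / sum(g["total"] for g in results.values()))
print(f"overall accuracy: {overall:.1%}")  # → overall accuracy: 81.5%

# Per-group accuracy reveals the disparity the aggregate number hid.
for name, g in results.items():
    print(f"{name}: {g['correct'] / g['total']:.0%}")
```

The point is not the arithmetic but the habit: any system that scores people should be evaluated per group, not only in aggregate.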
Yeah, I think that's another place where some of the sci-fi tropes honestly have inspired what we're seeing in criminal justice, RoboCop types of things, but I think we'll talk more about sci-fi later. I do think that has motivated things, because we see body-worn cameras, and we see the vendors who are building those cameras thinking about how to incorporate facial recognition, which I'd say is one of the closest things to actual AI that I see on a day-to-day basis: the amount of data a human mind maybe can't draw connections across, but with enough data you theoretically can, if it's accurate. The other thing we're seeing in the criminal justice system is deciding who can be released on bail or not, and where police should be deployed. Sometimes this is justified as making the system more fair, with the idea that if we're relying on data, we're taking out human biases. In other cases, it's that there's a limitation in resources, and by using data we can more efficiently deploy those resources. But I think it's the exact same problem as in the opportunity space, which I kind of straddle: all of that data, especially in the criminal justice system, especially in the US, is so tainted with our own history. You know, if you're looking at where police ought to go based on where they've gone in the past, where did they go in the past? Where they thought crime was going to be, which was based on their stereotypes of which neighborhoods were going to be bad neighborhoods. So pretending, and I think a lot of the technologists building these tools either pretend or really truly believe, that there's ground truth out there that they can just vacuum up and turn into a predictive model: if we rely on that data as if it's reality, we're going to again be not only replicating the past but entrenching it, because it ends up in these systems, and then the systems get more complicated, closer to what we mean when we say AI, and it will be harder and harder for us to actually change that in the future. And I think
that's one of the big risks when we're talking about bias kind of creeping in. Yeah, so, like, over-policing of communities of color, for example: that data then feeds into these processes, which results in more over-policing of communities of color, and on and on and on. And it just compounds, because the system says, go back to this neighborhood; oh, there was more police activity in this neighborhood last week, clearly that means there was more crime. It doesn't, but that's how the system can interpret it; that's the only data it has. And so then more police will go back the following week, and they never collect data on those neighborhoods where they didn't go. There could be crime happening in other neighborhoods, there could be reason to be, you know, working with the community out there, but if they're relying on data that's steering them in a certain direction, you get into a feedback loop that prevents the system from ever learning that there are other examples. Do you have opinions in this area? I have many thoughts. So, just to frame what my two colleagues talked about from a bias perspective: I'm a data scientist by background, and I'm also a social scientist by background. When I give this talk in, let's say, Silicon Valley, I highlight the fact that when we talk about bias, there's actually a lost-in-translation moment that happens. When data scientists talk about bias, we talk about quantifiable bias that is a result of, let's say, incomplete or incorrect data. There could be a measurement bias, there could be a design bias, a collection bias. If you've ever taken a survey: if you ask people whether or not they voted in the last election, there's some incorrectness to it. And data scientists love living in that world. It's very comfortable. Why? Because once it's quantified, if you can point out the error, you just fix the error. You put more black faces in your facial recognition technology. What this does not ask is: should you have
built the facial recognition technology in the first place? When non-data scientists talk about bias, we talk about isms: racism, sexism, etc. So, interestingly, we'll have this moment where data scientists will say, "you can't get rid of bias," and what they actually mean is that when we build models, a model is literally like a model airplane: it is a representation of the real world; it will never be perfect, and it should not be perfect. That's what a data scientist means. What a layperson hears is, "I am not going to bother to get rid of the isms." So that is a gap my group tries to bridge. When we build things like Accenture's fairness tool, etc., to the point of my colleagues, there's a context to it that's absolutely critical and important, and it is bridging that lexicon, between what we mean in society and what we mean quantitatively, that's absolutely critical. So, y'all have mentioned facial recognition, which is a type of artificial intelligence, or applied machine-learning-based technology, that has been a very hot policy topic, not only for privacy reasons but because of ism reasons. Anyone want to talk about what the state of the debate is there, and what people are talking about when they're talking about bias and facial recognition?
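The self-reinforcing feedback loop the panelists just described can be made concrete with a toy simulation; everything here is illustrative and made up (two hypothetical neighborhoods, arbitrary starting counts), not a model of any real deployment.

```python
# Toy predictive-policing loop: two neighborhoods with IDENTICAL true
# crime rates, but a biased historical record for neighborhood A.
true_rate = 10.0                  # actual incidents per week, in BOTH neighborhoods
recorded = {"A": 12.0, "B": 4.0}  # A was historically patrolled more heavily

for week in range(20):
    total = recorded["A"] + recorded["B"]
    for hood in recorded:
        # Patrols are allocated in proportion to past recorded incidents...
        patrol_share = recorded[hood] / total
        # ...and crime only enters the record where patrols actually go.
        recorded[hood] += true_rate * patrol_share

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"share of recorded crime attributed to A after 20 weeks: {share_A:.0%}")
# → share of recorded crime attributed to A after 20 weeks: 75%
```

Even though both neighborhoods have the same true rate, the initial 75/25 split in the record never corrects itself: the system cannot learn about the places it never looks, which is exactly the "feedback loop that prevents the system from ever learning" described above.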
Sure, I mean, I can kick it off. Well, it's been an evolving narrative. The initial narrative, and this is Joy Buolamwini and Timnit Gebru's work, was about there not being enough diversity in these data sets. What Gender Shades showed was that facial recognition was about 98% accurate for white men but only about 60-something percent accurate for darker-skinned African-American women, clearly showing this gap, which was a function of a lack of diversity in the data set. The narrative now is more about application: creating a more diverse data set just so that police can then go arrest minority children is not necessarily where the AI ethics space wants to be. So there are actually a number of bills about banning facial recognition; I think one of the most prominent debates was in the state of Washington, and there are also measures in Oakland, San Francisco, and elsewhere, you know, I'm not going to be able to list all of them. So, to your point, Kevin, it has been the issue that, from a legislative perspective but also from a human psyche perspective, we've latched onto the most, and I think that's because it's related to these sci-fi narratives; we all know the story of Minority Report. So it's much easier, as a person who works in the AI ethics space, to talk that talk. I don't have to explain what facial recognition is; people may not necessarily know how it works, and there are a lot of gaps to fill about how inaccurate it actually is in general, but people will understand the general narrative enough to know where the problems may come from, and this is where, you know, having this commonly watched science fiction lexicon is quite helpful. But, you know, facial recognition is not just a problem in the criminal justice context; that's the most frequent one we hear about, but facial recognition and facial analysis are both popping up in so many other contexts. There are tools out there being used to help interview people, using facial analysis to try to map
whether people are qualified for a position. And while the people building those tools are doing interesting things to test for fairness, does that justify, you know, the collection and use of your face to try to map onto this thing that shouldn't necessarily have anything to do with how your face moves? And to your point, it's also building on the field of affective computing, which essentially puts all of human emotion into about six buckets, so everything about who we are and what we feel falls into, like, six buckets. I think the most recent research was showing that black men with neutral faces were more likely to be labeled angry than white faces. So we're really pretty far behind, but, to your point, that's being used to make hiring decisions. So while we can latch onto this narrative of, like, we understand Minority Report, catching criminals, oh, that might be bad, there are all these ways it's creeping into our daily lives. And the thing is, from a business perspective, it's always sold as an efficiency gain. It is a product you sell to help people do their job, and the reason it often goes under the radar is that it is sold as a tech deployment, so it does not have to be reviewed by the city council or, you know, these different groups. If you were to try to sell a team of people to monitor and do predictive policing, that might actually have to go before the city council, et cetera. If I sell you a tech deployment, I am operating under vendor licenses; I may not actually have to go through the same channels. And this is where things are sort of being deployed, and we find out later and are like, what the heck, how come nobody knew? So we are entering a phase where we don't have a bunch of crazy robots or mega-intelligences wandering around, but we do have this mesh of algorithms in the background of our lives doing things, often shaping what we see online, which, Miranda, was the subject of some research you did. Could you briefly talk about that, and then we'll move on
to some other issues? Sure. So a lot of people have heard the controversy around employers maliciously, or in a discriminatory manner, targeting ads online for housing, for jobs, for credit, saying don't show this job or this housing ad to black people. That's a big problem, and there have been lots of collaborations, lots of meetings, lots of lawsuits about dealing with it. What we were looking at was what's going on in the background. So let's say I was running an ad for a job and I really wanted to reach everyone; I wanted everyone to have the opportunity to work for my organization. So I posted my ad online (Facebook was where we tested it) and said, send it out, anyone in the United States, or anyone in Georgia or North Carolina, can see this. But what we found was that the algorithm deciding who sees what ad was making its own determinations about who should see which job, who should see which housing opportunity. I think we found that lumberjack jobs were being shown to 90% white men; taxi jobs, on the other hand, were being shown to about 70% African-American users. And this was without us telling the system who we wanted to see the ads. We were trying not to discriminate, but the system was learning from past behavior of users: what they were most likely to engage with, what they were most likely to click on, what people like them were most likely to engage with. And it was using that to show those people what it thought they wanted to see, what was going to be most interesting to them, or what they were most likely to click on. We were looking at it in terms of ads, in terms of jobs and housing, but this has come up in the past as well with, like, filter bubbles: are we only seeing news that we want to read because algorithms are deciding that that's what we're most interested in, and so we should see more of that? And I think that, similar to facial recognition, when we're talking about AI, that's a use case where we're talking about hundreds of
thousands of pieces of data going into deciding what should be shown to you, on Facebook or on Google, and that's closer to AI, the closest to AI that I get, compared to, say, the criminal justice context, like pre-trial risk assessment or who could be released on bail. When people say AI in the courtroom is going to decide who's released on bail, often what they're talking about is a numerical model that's scoring people on a scale of one to six, which is not really super highly complex math. But these sort of online systems that are learning from people as they interact with information are closer to that, and they're really shaping what opportunities people have access to, exactly what you were talking about. Yeah, and following on that, I often think of my job as scaring people and then hopefully making them act on the basis of that fear. And what you were saying in terms of the scoring systems: they're in the background; they're not as visible as they would be in, like, a criminal justice system in explicit decision-making mode. And so I often use sci-fi as my reference to help people understand. "Nosedive" from Black Mirror is the one that seems to chime with people the most, but Minority Report, Gattaca, even, in places, Brave New World: I say these things and people grasp the weight of what I'm talking about in a way that is different than if you just talk about what seems like an administrative tool, and is often acquired, you know, as an administrative tool. I think any time you hear the word "personalized" (this is a personalized job board, it's a personalized news service), what I hear is "stereotype": it knows what type of person you look like. In the realm of the content we see, there's also emerging AI that is going to be used to deceive us in a variety of ways. We've now seen deepfakes, which is basically using AI to create a video image of someone saying something they never said. There was also this amazing
thing, if you didn't see it: OpenAI, the group that Elon Musk, amongst others, founded, came up with an algorithm called GPT-2 that was trained on 40 gigabytes of internet text to predict the next word if you gave it some words. So then they started feeding headlines into this thing to see if it could write a news story, and my favorite one was, they wrote a headline about scientists discovering a tribe of unicorns in the Andes that spoke English, and it wrote something that read like a human wrote it. So just imagine armies of these things spewing out propaganda BS, which gets us closer to the realm of geopolitical conflict, which is Lindsey's bag. And so I'm wondering if you could talk a bit about the role that AI is starting to play in the realm of international conflict and international, sort of, geopolitics. Absolutely. So this is a great example that illustrates the broader trend that artificial intelligence is living in. We are at a time where we have the democratization of software and the commoditization of key priority technologies. This means that more people, more countries, more non-state actors now have access to highly capable, diverse, robust technology portfolios than ever before. And we, the US, are quite used to being that capability provider, and increasingly other countries and other actors don't have to work with us, because of this global trend of ease of access to highly capable, low-cost capability. And so that really brings us back to this question of, is there an AI arms race? It's often framed in the context of, are we winning versus China, how are we doing, are we falling behind, what is going on. And you have to understand the way in which entities apply artificial intelligence or data analytics: you apply them to achieve your goals, accomplish your needs, and support your values. So the way in which, for example, China applies AI and facial recognition, and the abhorrent human rights abuses there, should not and will not look like the way
that the US applies AI because those fundamental value structures are different so when we think about, well I mean it has been a little depressing but I tell myself those fundamental value structures are different so when we think about who is going to win the race, the race is going to be won by the countries that figure out how do we make AI work for us how do we use AI and data driven techniques and this new portfolio of highly capable easily access technology work for us and that's going to be the country or this entities that win the race. If we want to really pick apart how are we doing versus China, we are still leading the way in research and development and innovation within the United States and I think there is a certain emulation of our model that permeates across the globe but we're really falling behind on the deployment and that's where a lot of the narrative of we're falling behind China, we're falling behind these authoritarian regimes that are figuring out how to make AI work for them, we're not thinking well about how do we actually take the technology lead in research development and innovation and how do we deploy it in ways that supports our ethical and normative values and so I think conversations like this thinking about this is a highly capable system, how do we make it work for us. So I'm glad you brought up ethics. We're going to spend the next excuse me a few minutes talking about now that we've set out some of the issues what are the sort of policy interventions we're seeing and I'd say we're seeing sort of self-regulation to some extent usually under the frame of AI ethics or AI fairness and then some interesting legislative and regulatory moves but Ramon you do AI ethics what the hell are we talking about when we're talking about AI ethics. 
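(An editorial aside on the GPT-2 mention above: "trained to predict the next word" can be made concrete with a toy sketch. The code below is emphatically not GPT-2, which is a large neural network trained on roughly 40 GB of text; it is a simple bigram frequency model over a made-up miniature corpus, with invented function names, shown only to illustrate the train-then-generate next-word loop that such models share.)

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, seed, length=8):
    """Repeatedly predict a likely next word, starting from `seed`.

    Greedy decoding: always take the most frequent follower. Real language
    models instead sample from a learned probability distribution over a
    huge vocabulary, conditioned on much longer context than one word.
    """
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor in training
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Tiny invented "training set", echoing the unicorn headline example.
corpus = ("the scientists found the unicorns and the unicorns "
          "spoke english and the scientists were amazed")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The qualitative point survives even at this scale: the model only ever reproduces statistical patterns of its training text, which is also why, at GPT-2 scale, it can fluently continue an arbitrary headline.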
Yeah, so I have a lot of thoughts on the statement you made. First, I have serious problems with framing it as an AI arms race. Number one, if we're going to talk about the inclusion of diverse narratives, framing everything in terms of a warlike, patriarchal structure of a zero-sum game is literally the worst and least inclusive way to talk about the use of a technology. By naming it that way we're setting it up to be combative, to have some winner and leader, which sets up the hero narrative we were just talking about. Even in the name we have set this up to be patriarchal and warlike, so I don't like to refer to it as an arms race. Interestingly, I have been talking to some folks who want to frame the discussion more like the space race that created the International Space Station, et cetera, something more collaborative, because it's not as if we're all just going to be fighting each other over values; that is a framed narrative. The other thing I'd take issue with you on: to the average citizen in China, the deployment of artificial intelligence has been fabulous. We like to harp on their treatment of the Uighurs, a small minority group, but if we were to take that same narrative and flip it onto the US, some of our deployments have been no different. We should point the finger at ourselves and our counterparts; look at India's system and the exclusion of certain lower-caste groups. And it's by design, to fulfill an internal political agenda. So I don't think we should sit on a high horse and act as if our values are better or that we're going to do it better, because when we take the AI arms race narrative into Silicon Valley, the concern is not so much "how do we do this in a way that's better or more ethical"; it's "shit, China's beating us, how do we get there faster?" No one's even stopping to think, because the arms race framing pushes this narrative of running faster. Much like with the nuclear arms race, we don't actually bother to stop and ask what we should be doing, because we're so busy watching the other guy, quote, beating us. And the problem with the "beating us" part is that the opponent, or imaginary opponent, has shaped the narrative and the metrics for us. If we are going to have a values-aligned system, it's harder to adhere to our values if someone else is defining what the race is all about, because we're going to have to adhere to their metrics to get there. So that's my spiel. But when we talk about ethics, and I'm glad you brought it up, it's such a complex issue, because it's a global issue. It really reminds us that borders and states and boundaries are artificial constructs of politics; that is the number one thing working in AI reminds you of. Think about a law like GDPR, the General Data Protection Regulation: it transcends borders and boundaries, and that's why it's actually impactful. If it were focused only on the EU, it would not have the level of impact it does on tech companies. So when we think about fairness, ethics, et cetera, it needs to transcend borders and think more about communities and groups and narratives that can filter upwards. The difficulty has been, and this is why I take issue with top-down framing, that so much of what we talk about is governance, whether of systems or of sets of values, and that needs by design to be inclusive. What we have not figured out is how to understand what ethics means to all the different impacted groups. Who does Gmail impact? Everyone. Great, now let's get the diverse perspectives to figure out what the ethical 
framework is for that. Well, good luck. So it's a tough nut to crack, and it has to do with the fact that these technologies and companies transcend borders and boundaries and impact literally every community out there.

That all sounds very straightforward, and it will get solved by ethics boards, right, Miranda? Ethics boards?

So I think the problem with the framing of ethics, as I hear it around ethics boards but also in general about "we need to make our AI ethical, and how are we going to do that," is that all of it presumes that at some point we're going to come to an agreement or a consensus on what ethics are. Have we ever done that in our society? No. We've been struggling over that for the history of not only our country but the entire world, the whole history of humanity. That's what societies have been structured around: struggling over those values and structures of governance and ethics. So what's really important here is to set up structures such that whatever we build today is malleable, so that if our values change as a society, we can ensure the tools we've implemented to fulfill those values also change. If we had had the technical capability to build AI systems a hundred years ago, what would our society look like today? It's super frightening. So boards and things like that are not so useful in the sense that they're going to come up with a solution; they do need to come up with mechanisms so that people are thinking about these systems in an ongoing way over time, and not only the privileged, high-level people in those boardrooms. How are they talking to the people who are not only using the technology but affected by it, as Ramon said?

So, what we're all laughing about and referring to, if you're not familiar with it, is the Google ATEAC board issue that happened in April. What happened was that a lot of pushback from the academic and activist community led to the board being disbanded. Interestingly, in the AI ethics space we have these unique roles of industry ethicists, people like myself and my counterparts at other companies; that's kind of a new thing. For those of us in these jobs, what I pulled together is a Medium article about how Silicon Valley is now, quote, disrupting democracy. That's actually what they're trying to do: they're trying to create these democratic systems, but they're doing it in the only way Silicon Valley knows how, which is very problematic. That Medium article fields all the industry ethicists who were able to contribute, and some thoughts on how we believe we can govern the use of these AI systems in an ethical way.

Some have suggested that these boards are attempts at ethics washing: giving the appearance of self-regulation as a way of forestalling actual regulation. That said, there are some ideas for actual legislation on the table, particularly in the context of the debate over privacy legislation. I was wondering if anyone could or would speak to that.

I've been in the privacy space, which is how I got into the data space, which is how I got into the AI space, for a little while now, and I'm amazed at the legislation we're seeing and the conversation around it. Last week there was a horde of privacy professionals in town, and for the first time I heard people talking realistically about legislation that would take into account intangible privacy harm, not just economic harm, which is what you usually need for a law like that to work, and talking about it as imminent in some way, shape, or form. I think that's remarkable, and it shows we've come a long way. There seems to be agreement that privacy is no longer the classical idea of notice and consent, that people do not read terms of service, and increasingly, which is 
something I've argued, that they don't have a lot of choice or alternatives when it comes to many mainstream tools, so expecting people to opt out is a poor way to ensure privacy.

I think the reason people are paying more attention to privacy now is that we're realizing what can be done with our data. It's not just the theoretical "your data is being collected and maybe it will leak and someone will steal your credit card"; it's being used to make decisions, it's being used to shape your information environment, and I think that's what's instigating a lot more attention from the Hill at the moment, and why people are focusing on privacy as the remedy. There's also another intervention that was introduced recently, the Algorithmic Accountability Act, which intends to compel companies or entities building predictive systems to check those systems beforehand for their impact: to check them for bias or discrimination or other types of harm. I think that's interesting because what it's trying to do is get people to slow down, to not go full speed ahead, to think before they act. There are still a lot of questions in that proposal. They envision the Federal Trade Commission enforcing it and creating rules around it, but who gets to see those impact assessments? Do companies actually have to do anything if they find some kind of harm? Who defines how much harm would require them to change their models? But what I think is interesting there is, again, that the incentive to move to artificial intelligence or machine learning is often to remove friction, to make everything more efficient and easy, and the reason we have laws, especially civil rights laws, which is what I mostly focus on, is that pure efficiency led to an awful lot of bad outcomes. So there's a reason to slow down, a reason not to be efficient, a reason not to be hyper-personalized, because if we do that we're catering only to the part of society that can take advantage of that ease, whereas other people can't. So I think those types of proposals force us not to be as efficient, and while businesses don't like them and we still don't know what they'll look like, there's a purpose to that type of intervention.

Moving on to the question of this event, what sci-fi can and can't teach us about AI policy: I'm curious for y'all's takes on how AI in sci-fi has been helpful or hurtful to the discourse around AI in policy, or to your own attempts to engage in that discourse. I've already flagged my pet peeve, which is that sci-fi has conditioned us to worry more about Skynet and less about housing discrimination, and I often think that Kafka is actually our best representation of AI, in the sense that his books are all about baseless bureaucratic systems that don't make sense and control your life. But I'm curious what y'all think.

In engaging with policy makers, particularly in the national security space, the equation of consciousness or sentience with intelligence, or with replicating intelligent function, prevents us from having an honest conversation about when and where and how you best use these systems. Thinking of them as conscious beings rather than algorithms, with all of the problems we've been talking about, really masks the ability to come into your problem area and have an honest conversation about the true pitfalls, the true benefits, and how you actually bring artificial intelligence or machine learning or computer vision into a workflow.

So for me, I gave you some of my touch points a little earlier, but the anthropomorphizing of technology is a real issue. I talk about education technology, and people often think about replacing teachers, the idea of robot teachers, and they picture the Jetsons, for those of you who may be old enough to know it: a robot at the 
front of the classroom, talking. There are things that can automate instruction right now that don't look like that, that are simply a platform, and yet they have the same sort of impact that putting a teacher at the front of the room would have in terms of what students learn and how they advance. I also think the all-or-nothing aspect of a lot of science fiction impedes some of these conversations. For reasons that make sense, most science fiction is set once the technology has been developed and deployed. They don't see it developing, they don't see it being adopted ad hoc, they don't see it messing up, and every single technology I have ever used has messed up at some point. I don't think our narratives account for that, even where accountability is concerned. Forget something as sophisticated as bias: what about typos?

For me it's two sides of a coin. One is that I think sci-fi has helped journalists frame old questions in new ways. Back to the criminal justice context: if we're talking about robots in the courtroom, or Minority Report, that gives people an immediate frame of reference that something they thought they understood is changing, that it's changing because of technology, and that it's worth paying attention to. So I think that has, as I mentioned earlier, broadened the community of people interested in these issues. Just a week or two ago, the Partnership on AI, one of the self-governing entities created in recent years to think about some of these issues, released a report about pre-trial risk assessment, about using AI in the courtroom, coming out and saying this technology is not ready yet, and, I believe they said, that we should consider whether it ever ought to be used; that there are many open questions and severe limitations. That's a totally different stakeholder group than has been involved in the criminal justice context for quite some time, and it lends some credibility to have the technologists saying, we know what's going on here, we can't build this yet, and you don't want us to build it. So that's interesting. On the other hand, when the media frames these news stories using a sci-fi trope, people can presume they understand what's happening when in fact it's a completely overblown version of what's happening. For instance, with social credit scoring in China, think of the "Nosedive" episode of Black Mirror, the one where everyone scores every interaction they have and you walk around with a score that determines what you have access to. People have that vision when they think of what China's doing, and that's just not the case. The reality is much more rudimentary: patchworks of blacklists based in their value system, which is why it's not as jarring to mainstream Chinese society as we imagine it would be. A lot of us have this pop culture frame of what a social credit scoring system looks like, and it redirects energy that could be used to come up with different solutions, or to think about how to prevent what's actually going on here in this country, because we're distracted by a frame we think we're familiar with.

And Ramon, and then we'll get to Q&A. Sure, I love everyone's points that have been made, and I wholeheartedly agree, especially on anthropomorphizing; it's extremely problematic. The one I would add is a problem I see a lot in Silicon Valley, a fundamental belief, maybe in the tech industry as a whole but definitely in Silicon Valley, driven by some of this literature: that the human condition is flawed and that technology will save us. This is the obsession behind having microchips in our brains so that we have perfect memories.
But we don't want perfect memories. There are people alive today with a condition that makes them vividly remember everything that has ever happened to them, and they live in constant trauma. Imagine being able to relive your parent dying with the same intensity you felt when it actually happened. We are meant to forget things. So this notion that technology will perfect us or fix us, that humanity is weak and flawed, is problematic, because when we try to create artificial intelligence we don't create it around human beings; we retrofit human beings to the technology. Especially living in a world of limited technology, technology that is not quite where it is in the stories but, as Alana very accurately said, is maybe 30% of the way there, we force ourselves to fit the limitations of the technology rather than appreciating that maybe we are the paradigm to which technology should fit.

Can I add one more thing, Kevin? I think the other thing is that even when we're reading sci-fi that's intended to be dystopian, and we're intended to read or watch it as dystopian, it's acculturating us to the idea of constant surveillance: that in order for the technology in that story to work, that data is needed, and that's just inevitable. I think we're getting used to that idea, and that's what we're seeing today with facial recognition. There just aren't enough people pushing back against facial recognition, because we see it as something that's inevitably coming; maybe it will be bad, but it's going to come. I think that's something to think about as well, even when it's clear the story is going south because of that surveillance.

So, we don't have a whole lot of time for Q&A because we're jamming a lot of content in today, so a few ground rules. 
Questions in the form of a question; keep them brief. Answers responsive to the question; keep them brief. Hands raised. Yes, ma'am, please wait for the mic to come to you. She's bringing you a mic.

Sorry, real quick. The conversation has two sides, in a way. One is government intrusions, and we have, you know, civil rights kinds of protections against those. The other is private sector intrusions against privacy, surveillance, and things of that nature. Putting aside the government side for the moment, where on the commercial side do we have options to push back from a legal action perspective? Are there causes of action? And just a final footnote: and yet we all go out and buy an Alexa, install it all over our house, and leave it on voluntarily. But I'm interested in the private side, the new phrase being surveillance capitalism.

So what we're seeing, and Miranda actually mentioned HUD, is government trying to figure out how to take existing law and existing protections and apply them to new settings. It's a bit of an uncharted space, because as Miranda said, you can put an ad out in good faith and then the algorithm makes decisions based on how it was trained; you may not even realize it's being deployed in a biased manner. We had to come to the realization that that was happening and then figure out what the angle is. And you can't entirely separate out government here. With the UK's ICO, the Information Commissioner's Office, and the FTC, and in some of the language of the bills we're seeing, there's a latching onto the notion of protected classes: which groups are already protected, and how can existing law be leveraged further? That's a starting point for building further protections. And to your point about Alexa, you raise an issue in the AI ethics space, which is: what do we have to offer? 
Technology companies have nice shiny gadgets, the ability to look cooler than the Joneses, or whatever; they offer you incremental ease. What do we offer? We offer scare narratives. In our space we have to figure this out, because yes, the notion of liberties, freedom, and protection is, unfortunately, less tangible than a shiny new watch. So what can we in the AI ethics space offer people that can combat this narrative that tech companies have honed so well? I'd like to find out.

Let's keep moving. Questions: that gentleman in the back. Hi, thank you all very much, your conversation today was really great. My question is for Ramon. You mentioned the translation problem between different communities about bias, but I wanted to dig down a little on that, and maybe challenge it a bit, and ask: is there not a space in which some of us might mean we can't remove bias, because we're not talking about isms but about the foundations of isms?

Oh, I like that. There's an entire narrative now saying we really shouldn't be thinking about fairness but about justice; we shouldn't talk about bias, we should talk about pain and harm. So absolutely, and I think Miranda put it really well that this is not going to be a solved space, and we all just have to get comfortable with that. It's funny, because in industry "change is the new norm" has been the line for years when talking about technology, and I think we will actually have to grapple with the fact that we will just be living in a space of constant change, growth, and evolution. So absolutely, you're totally right.

One more quick question, which I will not answer. Hands. 
This gentleman right there. To what extent is what's imaginable in AI ethics a function of the scalability imperative that venture capital funding of AI development demands? I'm thinking of the scalability of returns on investment, scale-free growth versus the concentration of capital.

So I've thought a lot about that in terms of the practicality of implementing accountability, explainability, transparency, ethical models, algorithmic impact assessments. You have people inside these companies, some of whom are not evil, actually, but there's a commercial imperative, especially for the publicly owned companies: they need to produce profit for their shareholders, and when that is the ultimate bar, and when those results are scrutinized incredibly carefully, I think it leaves companies in a very difficult position to slow down, increase friction, and be thoughtful about implementation, because they all seem to be racing against each other. But there are some really interesting examples, and to your question earlier about how we push back against corporations doing this: we're doing that. The advocacy community is learning how to advocate to the tech companies, using shareholder action, public campaigns, and directed research to say, here's your problem and here's how you can fix it. And especially as the people building this technology create systems making really important decisions in people's lives, decisions that may or may not fit with the law, I think we can come to expect those actors to also be playing a role of governance that we have a responsibility to pay attention to, to tell them what we as the public expect them to do and not to do, and to get other people to appreciate that fact as well.

Well, that's a nice closing note of agency and hope, so please thank the panelists. Thank you. 
And now I'm going to take the opportunity to introduce our next solo speaker, Dr. Kanta Dihal, who is the postdoctoral research associate on the AI Narratives project, one of the project leads on the Global AI Narratives project, and the project development lead on the Decolonizing AI project, all at the Leverhulme Centre for the Future of Intelligence at Cambridge University. Across those projects she explores how fictional and non-fictional stories shape the development and public understanding of AI, including the range of hopes and fears about AI that are reflected in our stories, which is what we're going to talk about today. So Kanta, if you could please come up.

Thank you, Kevin. So this is work that I've been doing since around 2017, when my co-author Stephen Cave and I came up with the idea to write a short paper categorizing, trying to make some sense of, the many narratives we have around artificial intelligence, and to see if we could divide them up into different hopes and different fears. Two years later, we've looked at 360 books, films, TV series, and other narratives in English from the 20th and 21st centuries; that's both a caveat and an explanation of the scope of our research. 
We found that there were many works to look at even within these parameters. That work was inspired by the fact that, as you just heard on the panel, the prospect of sharing our lives with intelligent machines somehow provokes people to imaginative extremes: thinking about them seems to make people either wildly optimistic or melodramatically pessimistic. The optimists believe that AI will solve many if not all of our society's problems. You may have heard of the London-based AI company DeepMind, who created AlphaGo, the Go-playing AI system; they sometimes use the slogan "solve intelligence, and then use that to solve everything else." Then there are the pessimists, who fear that AI in its many forms will inevitably bring about humanity's downfall, a theme very frequently picked up by the media. Our centre was involved with a report on AI by the UK's parliament, and when that report was published, a UK tabloid called The Sun covered it with the headline "Lies of the Machines: Boffins" (that's a British word for academics) "urged to prevent fibbing robots from staging Terminator-style apocalypse." And those stories matter: they influence the development of the technology itself, they influence public fears and expectations, and they can influence policy makers. So our research aims to explain the structure of our stories around AI and why they have such a grip on our imagination, and we hope that's a first step towards more diverse and constructive responses to the prospect of intelligent machines.

So why does the idea of intelligent machines, and especially human-like artificial intelligence, fascinate people so much, and for so long? There have been narratives around intelligent machines for 3,000 years, and serious attempts to explain that fascination started about a century ago with Sigmund Freud, one of the first thinkers to ask why we find such machines so fascinating. He focuses on how we find them uncanny: that creepy feeling of seeing an imagined fear become real. He suggested that one reason we find especially human-like AI and androids so unsettling is that they leave us uncertain about just what we are looking at: reality being not what it seems, not being sure whether something is dead or alive, and the sense of somehow being perceived. He discusses a short story from 1816 by E.T.A. Hoffmann called "The Sandman," in which the protagonist Nathaniel, the man in yellow, is bewitched by the beauty of a woman called Olympia, the one in the very extravagant dress. Eventually it turns out that Olympia is an automaton, which makes you wonder how closely this guy was looking at her from so close up, and when Nathaniel discovers it, it drives him to madness. That kind of unease still feeds into contemporary ideas about what artificial intelligence could look like, as in the 1975 film The Stepford Wives, which was later remade as a comedy. For those who don't know it: in that film the menfolk of a small US town replace their much-too-human women with what they consider to be perfect android wives, identical in appearance to their original wives, but presumably they bake better muffins. More recently, historians exploring the fascination with human-like machines have focused on what they call their liminal quality: the boundary-challenging and boundary-transgressing quality of those machines. We tend to divide the world very neatly into living things, plants and animals, and non-living things, hammers and nails, but AIs seem to fall somewhere in between, because like non-living things they are built by humans from inanimate components, metal and plastic, and yet like living things, especially the intelligent androids we imagine, they can speak and sing, sometimes walk, and so on. That category-defying element of AI is, I think, an important part of why we find them fascinating. If you've seen any 
of the famous YouTube videos of Boston Dynamics and their four-legged robots you'll understand that they are captivatingly and slightly disturbingly both like and unlike living machines living creatures but we also think that there's more to be said about why we find the idea of such a machine so provocative and the starting point on that is that AI is a tool it's a piece of technology designed to help us achieve our goals but it's also supposed to be an intelligent tool so a tool with attributes that we would normally associate with humans a tool that's autonomous that can think that has goals perhaps even you'd call a mind and that makes it very different to ordinary tools and that's what has such huge implications because those attributes are what promise to make AI the ultimate tool the ultimate technology it's in a sense not just a tool but in all the many ways in which AI will be able to be deployed it's seen as the master tool so the deep mind can solve intelligence then use that to solve everything else whereas the thinking power of humans is limited by our cranial capacity AI does seem potentially limitless and promises to work out solutions to all our problems so it represents the apotheosis of the technological dream the dream that we've been having for technology ever since someone clever rubbed some sticks together that we can use tools to create a better world a paradise on earth so that's the source of our extravagant hopes but at the same time there's the idea of creating tools with minds of their own that creates to our minds inherent instabilities because a tool with goals could have goals that misalign with ours a smart machine could outsmart us a machine with autonomy could choose to disobey and that instability is the source of our extravagant fears so we are in our research that those hopes and fears go together we've analysed works that include both fiction and what we call speculative nonfiction so nonfiction that explores the future and on 
the basis of that we identified four dichotomies that structure our affective responses to intelligent machines; they each consist of a hope and a corresponding fear. So first let's look at the hopes in detail. The first one concerns life. The pursuit of health and longevity is humans' most basic drive, I mean, it is the precondition for almost anything else that you might want to do, so consequently humans have always used technology to try to extend our lives. So it's no surprise that for AI the hope is that it will do that, in the way of giving us better diagnoses, personalised medicine and so on, and the most ardent advocates of AI's potential in this field suggest that it could make us somehow entirely immune to ageing and disease and allow us to become what's sometimes called medically immortal. But that's not real immortality, because it still relies on having this human body, which is in all kinds of ways messy and unreliable and can be hit by cars. So many humans go even further, suggesting that we can actually transcend the body altogether and upload our minds into cyberspace. Now, the second hope concerns time. Assuming we manage to stay alive for as long as we wish, then we hope to be able to use all that time as we wish. That's the dream of AI freeing us from the burden of work that we don't want to do: no more mind-numbing days filling in Excel spreadsheets behind your desks, because the AI will do all that, and we'll live in smart homes that do all the laundry folding for us, and we'll be lords and ladies of those AI manors. And AI offers us such a life of luxury and ease, and potentially could do so without the very complex social and psychological pressures of having human servants, so humans that you use to do the dirty work for you. The third hope concerns desire. Once we have time, we want to fill it with all the things that bring us pleasure, so just as AI promises to automate work, it promises to automate and uncomplicate the fulfillment of every
desire. It could be the perfect friend, always there, always ready to listen, never demanding anything in return, and in imaginings of AI there are loads of examples, starting with Isaac Asimov's very first robot story, Robbie, about a robot nanny, to the operating system Samantha in the film Her, and of course many hope that intelligent androids will be the perfect lovers, as we saw in, for instance, Westworld, until that went wrong. And finally the fourth hope concerns power. Once humans have created that paradise in which we have life and time and all our desires are fulfilled, we'd want to protect that, and I might add that humans have a habit not just of fighting to protect their favoured way of life but also of forcing it on others; in an AI context the Culture novels of Iain M. Banks present such a view. And stories of what we call intelligent autonomous weapons are ancient: they go back at least to ancient Greece, to the bronze giant Talos, who defended the island of Crete from pirates and invaders by throwing boulders at them, and you have stories of bronze knights guarding secret passageways all the way through the Middle Ages, and then in modern times, of course, much of the funding for AI research has come directly from the military. So as the master technology, AI is also potentially the ultimate weapon. So those are the four utopian visions that those hopes reflect, but they are just inherently unstable: the conditions for each hope to be fulfilled bring about the potential for that utopia to collapse into a dystopia. And one factor in particular is key to that balance between hope and fear and where it tips over, and that is control. The extent to which humans are in control of the AI, rather than AI being in control of the humans, determines whether we consider a future prospect utopian or dystopian. So on the dystopian side, on the subject of life: while people hope to achieve immortality, its flip side is losing our humanity in the process, because what are we willing to sacrifice in
order to live forever? Our memories, as the panel just discussed, our emotions, our physical form, our individuality and embodiment. So this is a Ship of Theseus question: if you replace all your bits with metal prostheses, is the resulting immortal being still you? And how much humanity will be left when you turn yourself into pure data and upload yourself to one of those server farms in Arizona? And our hopes for having more time can turn into fears of obsolescence, where we lose control of the amount of leisure time that we have. At the same time as we dream of being free from work, there's this terrifying idea of being put out of work, because of course work doesn't only provide an income but also a role in society, status, standing, a feeling of accomplishment, pride and purpose. A UK paper had the headline last year 'robots are the ultimate job stealers, blame them not immigrants', not sure if that's so helpful. Of course, as technology advances, society adapts; most proponents of this view say, well, eventually there will be new jobs created because of AI, but it's quite understandable that people would worry: if AIs continue to be developed to get better at more and more things, what will be left for us to do? And our hopes with regard to desire can tip into the fear that we might bring something unnatural or monstrous into our home, so that's the uncanny valley, the effect you get nowadays when you see those robots that are supposed to look like humans but don't really. But there are also fears regarding AIs being better than humans: if we have all our desires fulfilled by AIs, then we become redundant to each other; we might not only become obsolete in the workplace but even in our own homes and in our own relationships. And finally, we can easily imagine how the hope of acquiring dominance turns into its flip side, the fear of being dominated, the fear of losing control of AI as a tool, the sorcerer's apprentice scenario, or the Roomba going
wild and hoovering up your hamster. But on another level there is the fear that AIs will acquire minds of their own, so that they turn from tools into agents, and that robot rebellion theme is really persistent and reveals that there's a paradox at the heart of our relationship with intelligent machines: we want clever tools that can do everything we can and more, including be the perfect soldier, and then, for those tools to fulfill our hopes, we give them attributes like intellect and autonomy. And of course it's not hard to see the tension in the idea of creating beings that are superhuman in capacity and subhuman in their status. Fears of Skynet show a recognition of the deep paradox in creating powerful, independent minds enslaved to us, which is why so many narratives of robot rebellion so closely parallel narratives of slave rebellion, but that's another piece of research I'm working on. So those are our eight hopes and fears, and last year we decided to look at the role that these eight narratives play in the life of the average British person. We surveyed over a thousand people, and the findings of that survey show that the UK population has a markedly negative view of AI: levels of concern were on average significantly higher than levels of excitement across these narratives, and unfortunately concern was higher than excitement even for several of the hopeful narratives. We also had an open question, how would you explain AI to a friend, and in response nearly 10% of people spontaneously offered negative sentiments instead of explaining what AI is. So we titled the paper 'Scary Robots', because that's literally what someone replied: how would you explain AI to a friend? Scary robots. So dystopian visions seem to be so entrenched that large numbers of people are inclined to see the downsides of AI even when presented with wholly utopian visions, so negotiating the deployment of AI and informing people of what it can and cannot do will have to contend
with those entrenched fears that underlie even what seem to be quite positive stories. Thank you. So I'm going to introduce Madeline's video. As I said, we have a jam-packed schedule with a lot of content today, so thanks, Kanta, really fascinating research, I highly recommend following up and reading some of it. Also, bad host, I forgot to say the hashtag: the hashtag for today is AI Futures, if you would like to tweet about this. Next up I'd like to introduce Madeline Ashby, who like Kanta is on the advisory board of our AI futures project but unlike Kanta couldn't make it today. Madeline, in addition to being a professional futurist who consults with companies and government agencies, is also an accomplished science fiction writer, best known for her most recent novel Company Town and for her Machine Dynasty series. Her work on the third and final book of that series is what kept her from being here in person today, so she has instead recorded a five-ish minute message that'll serve as a lead-in to our next panel. So if we could queue up Madeline's video, that would be great. My name is Madeline Ashby, and I welcome you to this event, and I apologize that I cannot be there with you now. The reason that I cannot be there with you is that I'm a science fiction writer as well as being a futurist, and I am wrapping up the edits on the final book in a trilogy about artificial intelligence, the subject of our conversation, and it's requiring a lot of my attention, in part because I'm realizing that it's sort of the last chance that I have to work with these characters and make a final statement about, you know, what I was trying to do when I decided that I wanted to write about artificial intelligence and the evolution of consciousness and what it would be to be a different kind of consciousness. I think that writing about artificial intelligence is basically an ontological question: it is about what it is to be, it is about, you know, taking on this otherness, and
I think that, you know, one of the challenges when we talk about how we're going to write about AI is that so much has already been written, both at the level of myth (when we talk about stories like the Golem, stories like Pinocchio, things like that, those are also artificial intelligence stories) and at the level of this whole gamut of pop culture stories, right? And as I'm sort of finishing this trilogy out, I realize now that I'm thinking about all of those other renditions of this story, all of the other versions of this story, and how I can possibly set my characters apart. One of the ways that I've tried to do that is to make sure that my characters make reference, in their dialogue, to other depictions of robots. So in the Machine Dynasty, which is my trilogy about robots who eat each other, and evil grandmothers, and so on and so forth, these robots make reference to the fact that the word robot comes from the old Slavonic word for slave. They are aware of the fact that in popular culture they have already been depicted as, you know, godless killing machines or sex bots or skin jobs or what have you, and they're aware of it, they've seen depictions of themselves. And I think that if you believe that artificial intelligence will eventually achieve sort of an anthropomorphized consciousness, or a human-like consciousness, or even just a mammalian consciousness, a mammal-like consciousness, you're talking about something that might later read what you wrote about it, the same way as when you blog about your kids: there's every possibility that they're gonna find out what you wrote. And I think that's one of the most interesting challenges about this, that you have to be kind of careful about what it is you're doing. What expectations are you creating? What are you telling this thing to be, what are you telling it to become, and can you tell it to become something better than you? You know, can
it be better than what you are? Is it a true evolution, can it go beyond you? And I think that's one of the most interesting challenges as we frame debates about artificial intelligence, debates about what intelligence is, what consciousness is. Why is it that we think that our version of intelligence is the best? Why is it that human intelligence gets sort of this primacy, why is it considered the best? Isn't that sort of an anthropocentric, narcissistic attitude for us to take? Aren't we discounting other models of intelligence: whale intelligence, dolphin intelligence, the intelligence of bees, raven intelligence? All of those other models exist on this planet; they are aliens among us, and those are natural intelligences, there's nothing artificial about them, and yet they are just as foreign to us as some of the fictional things that we are probably talking about today. So because I can't be there with you, I guess what I would ask you to keep in mind is: eventually you might have to explain what you wrote. You might have to explain the story you told, you might have to explain why you represented an entire type of intelligence in a certain way. When we talk about representation in fiction, we are often talking about really loaded categories, really sensitive topics; we are talking about the subaltern, we are talking about marginalized people, we are talking about bringing forward representations of people who have been characterized as villainous, as evil, as depraved, as perverse, all of the qualities that sort of get penalized later on, that are considered bad. And so I guess, when we talk about how we represent artificial intelligence, think about the lessons that it might be learning from you. Is it seeing itself? Is it seeing itself represented? Is it seeing the potential for good, is it seeing the potential for growth? You know, we ask that question about ourselves: how
are we representing ourselves, how are we representing different groups of ourselves, how are we representing the multiplicity of humanity? And we should possibly start considering how we represent the multiplicity of intelligence as well. So I guess that's sort of what I would say; hopefully I would be more articulate if I were actually there, but edits are pretty killer, so good luck, guys. Thank you, Madeline, and good luck with your edits. Now I'd like to welcome to the stage the panelists for our second panel, on AI in sci-fi. So coming up, folks: the moderator, Andrew Hudson, is a science fiction writer himself and a graduate student at ASU, where he studies how speculative futures can better help us imagine how to live through climate change, and where he has also been leading the research for our AI policy futures project, so thank you for that. So Andrew, take it away, and let's go till 3:20 instead of 3:15 on this. Yeah, sure, thanks everyone, and thanks to the rest of my panel. We're the fiction panel following up on the fact panel, and I thought the fact panel did a really good job laying out some frustrations that I think are very reasonable to have with the science fiction literature that has used this term AI. So, you know, I'll just say it on behalf of sci-fi: writers are bad. But what I hope we can do in this panel is have a slightly more literary discussion to try to answer, well, why were those the stories that we were telling, and what has been the point of telling those stories even though they don't now necessarily always align with the policy problems that we're having, what was the use of them? So I'll let the rest of my panelists introduce themselves, but I was hoping we could start, as we go through, by responding to Madeline's provocation: what kinds of blog posts are we leaving about our children, human or non, and the type of society that they're going to be creating? Chris?
Sure. I have an opportunity to introduce myself as a solo speaker next, so I'll be very brief: I'm here for being the author of scifiinterfaces.com, a nerdy blog. But I actually think that Madeline's injunction about thinking about your progeny as your audience is one I do want to keep in mind, partially because it will help both my biological progeny and my ideological progeny understand where they came from better, and I don't want to put a veneer on that and lie or change what I would say. I'm Lee Konstantinou, I'm a professor in the English department at the University of Maryland, College Park, so I'm a local. I teach and write scholarship on science fiction, and I'm also a writer of science fiction: I've written a novel, I've written a bunch of short stories, and I'm thinking a lot about AI in different projects that I'm working on. I don't know if we're going to introduce ourselves first and then answer the question, but to Madeline's provocation, the thing that came to mind is that the person who writes the blog post about their child is really, in a way, not writing about their children at all; they're often writing about themselves, writing about their own hopes and aspirations. And one thing I would say about a lot of our science fiction narratives that feature AI is that they're often not really about AI in any kind of technical sense: they're not engaging in the project of forecasting, they're not trying to give us a technical blueprint for the future. And so, to our AI progeny who will watch this video, I say: it wasn't about you at all, it was all about us. Well, I'm Kanta Dihal, I've just been introduced by Kevin, so I'll just go straight to the question. Well, I don't have children of my own, but I do have a strong belief that yes, you might want to keep them in mind when you publicly write about them, because I recall a friend showing me a blog post that a pregnant family member had, and her musings on
how she hoped that this child was going to turn out healthy, because otherwise she didn't want it. Which brings me to the idea of, and I guess it must be mentioned, or maybe I'll doom you all by saying this, but Roko's Basilisk is the sort of thought-experiment-to-terrify-your-children-before-going-to-bed story: if you know that a superintelligence is going to exist in the future, then you have to bear in mind that it is going to know everything you do in your life, so you'd better dedicate your life to making sure that this superintelligence is going to be built, and not hinder it, because otherwise that superintelligence will make a copy of your brain and torture it into eternity in cyberspace. Yeah, so now you all have to go out and do that and write nice blogs about the AI. I'm Damien Williams, I am a PhD researcher at Virginia Tech. My work is in science, technology and society; I'm researching the ways that bias and values get embedded into technological and non-technological systems, specifically looking at artificial intelligence, machine learning, and human biotechnological interventions such as prostheses, implants, and other what people might think of as cyborg implements. I used the word bias there, which is kind of why my question was what it was earlier: I mean bias both in terms of perspectives, but also in terms of models, and also in terms of the things that undergird what would eventually become prejudices. Bias in that understanding is another way of thinking about what it is that we model for and try to predict based off of. My prior degree is a combination of philosophy and religious studies, and so this conversation about what it is that we leave for our children, and what the basilisk might do, and what the mind is, and what consciousness might be and be modeled as in these stories, all of those things are pretty pertinent. From Madeline's provocation, I think that we do have to kind of think about our children, our
progeny, but I don't think that necessarily requires that we change what we say; it means giving context to what we say and why we say it. My own parents, I want them to be honest with me about what they feel, and I don't necessarily always have direct access to exactly why they feel it when they feel it; we don't have the access to look at the context of literally everything all the time forever. So in that sense, if we're talking about a progeny, as in Madeline's provocation, that will be able to reach back and see why we thought what we thought when we thought it, I think we should be careful not just about what we say, but careful to be willing to think about why it is we feel what we feel, and not just toss ideas out there without that context. And I think that's about communication, I think that's about not just hedging our bets so the basilisk doesn't kill us or torture a copy of our brains forever and ever; it's about being willing to be open and communicative with another mind that, while it might be drastically different from ours, is still made from us. And I think that's just parenting, in any capacity. Well, hopefully context is kind of what we can give some of today, around some of the stories that have shaped a lot of the mythos that we've built up. So I want to go back to Kanta's hopes-and-fears dichotomies, which I think are really fascinating, and maybe ask the panel: are these a reflection of the way sci-fi has played into narratives that we already had in our society about the majorities versus the marginalized, or versus minorities, or versus the outsiders? And how have maybe some of the core AI stories forwarded those narratives or produced counter-narratives? So I said in my previous answer that science fiction narratives about AI are often allegorical in their scope, and one of the main, or great, allegorical subjects of science fiction about AI is the question of power and authority and domination, which your talk I think outlined so beautifully,
and so I think what we find in our science fiction narratives about AI is, like, every possible combination of forms of domination. You get AIs that kill all humans; humans who in one way or the other are dominating or torturing AIs; you can think of a narrative like Westworld or Ex Machina where the AIs could arguably be said to have good reason for rebelling against their human masters; you get works of science fiction like Dune or Battlestar Galactica where there is a prior AI revolution or AI uprising that leads to the elimination or extermination of AI. You get all of these variants, and they're often not very nuanced, you know, they pick a side, they pick a trajectory, and I think the most interesting science fiction is finding a more nuanced or pluralistic vision of what AI might be, breaking out of these tropes. So a recent book by the novelist Annalee Newitz, her book Autonomous, I think is actually one of the best visions of a world in which AIs come in all shapes and sizes: they're embodied in a variety of ways, they have political opinions, they're kind of wrong, misguided, foolish, courageous, and they're not quite human at the same time. And so I think a promising science fiction is science fiction that is moving in that more complex direction, for my taste; I don't know if that answers your question, but... Chris, I know your tastes run a little more poppy, what do you see in this type of predominant narrative versus, like, counter-narrative? I do study big-budget films and television shows, mostly, and the creators of those stories are always hedging their bets, because they want to make as much money as possible with their stories, and that means that they can only go so far outside of a paradigm before they begin to lose that audience.
Primer is a great film about time travel, but it is not accessible to the majority of pop sci-fi viewers, and that dichotomy, of 'yes, I can get Thor, he's a dude with a hammer' versus 'I can't quite understand the anglerfish metaphor of Under the Skin', means that the things I study tend to be on the safer side of sci-fi. And what I see across the narratives that I analyze is that they work on a principle of what-you-know-plus-one, so, to abuse a phrase, and unfortunately I can't remember the fellow who coined it, like 'what if phones, but too much', or Daniel Warburg. They can't really go to the extent of, you can't waste 20 minutes of an audience's time with a giant backstory in order to explain why this moment that you're about to see in the cinema is relevant. They have to play it fast and quick, and that keeps the stories in cinema and television fairly less risky. Is this a fair way to spin out from your dichotomies? Yes, definitely, and when you're looking at the relationship between these kinds of narratives and the older narrative traditions that they fit in, again, it's almost as if AI is the hyperbolic version of technology making everything possible, but it's very similar to narratives of flying. I mean, flying was a technological dream for thousands of years until it actually happened, and it took a form very much unlike what had been imagined in all those narratives.
There was no wing flapping and there were no steam engines up in the air, but we could fly, and we can fly, and nowadays it's just really everyday business. So in the same sense, these stories about AI are in all kinds of ways anticipating how we relate ourselves to intelligent machines, and so on. On representation and counter-narratives: I think one thing that many stories of AI make clear is that at first sight they seem to be about humans versus non-humans, so humankind as this one globule, in which all of us here and everyone out there is included, versus the rest, and the same with narratives of aliens. But what these narratives actually reveal is that humanity is something that is granted as a matter of degree. Some people are considered more human than others, and when you get an intelligent machine, that one slots into that hierarchy and shakes that hierarchy. And intelligence is actually a way in which that hierarchy has been maintained, with things like, here in the US context, the SAT being developed by a eugenicist in order to keep people of colour out of the universities. So intelligence as this benchmark for how human something or someone is gets really problematic when you bring in an artificial intelligence that might be more intelligent, because that one might start poking all the way at the top, saying, excuse me, I'm at the top now according to your benchmarks. And that's where people like Elon Musk start worrying.
I really like the flying question, and one question that I have heard that I find really provocative is: does a submarine swim? The question of whether a machine thinks may actually be just as arbitrary; why do we say that a plane flies, but we don't really like saying that a submarine swims? It's just sort of a gimmick of language. But yeah, to your other point, it seems like AI stands in for the other in lots of allegorical stories, and so maybe, Damien, can you give us some examples of this if you have any? And is it helpful to have these types of stories now that we're talking about the ways that real-life AI systems other human beings? To answer your second question first: yes. To answer your first question next: the examples that we have go down through history, as we've kind of talked about a number of times, and Ashby brought this up in her recorded talk, where the word robot comes from. That's from a play called R.U.R., Rossum's Universal Robots, and that's about an oppressed working class who were enslaved, made into a group of workers, made to be these workers. But there are also instances, we can even look back to when robots were being promised to everybody in actual IBM ad copy, this idea that everybody would have a robot slave of their own; that was literally ad copy that was in magazines, like 'the days of slavery will come back, don't worry, we don't mean humans'. So there has always been this undercurrent of the notion of the oppressed, the marginalized, the uprising and kind of overcoming, and the tension between, on the one hand, we think that's right and we think it's justified, and on the other hand, we're scared of it because it'll be uprising against us. We have that in Westworld, from the original on; we have that in all of Asimov's stories; we have that in basically anything with a machine intelligence that somehow turns on its own creators, making humans obsolescent. That kind of
process of obsolescence becomes the stand-in for how we become the ones that got overthrown: whoever expected this could happen to us? And I think it's important that we still think about this, not necessarily in the same dynamics as those kinds of slave narratives of oppression, but in terms of marginalized peoples, thinking about the ways that the robots are often stand-ins, even when they're not representing overthrowers, for people with non-standard or neurodiverse positionalities in the world: for autistic people, for people with ADHD, for people who think and see and experience the world differently. And often, even just in our linguistic conceits, there's a line drawn between neurodiverse populations and robotic-ness or machine-like qualities. So that's why I think the answer to your second question is yes, it has to be investigated. We still have to think about these things, because even as we are creating systems which other people, which take in data points or are constructed at the very outset in such a way that they will marginalize or further oppress, they are still going to be used as touch points and metaphors for talking about the very people and the very populations that they are oppressing. And we have to take the time to render out in stories a model for thinking differently about that, for specifically interrogating that question, for saying, well, isn't an oppressed person right to overthrow their oppressor? Isn't someone who sees the world differently right to question the metrics by which they're being judged? That's one way to read Blade Runner, by the way. There's a burgeoning host of autists, people with autism, who are looking at Blade Runner and going: maybe the problem isn't that these stand-ins for autistic people don't feel, or don't feel the right way. Maybe they feel too much.
Maybe the way that they feel is present, but different enough that the humans in their capacity don't understand what it is that they're feeling, and they are reinterpreting that narrative in that way. And so thinking about how we take those narratives of oppression and specifically ask, well, what if the people who are being modeled or mirrored here are the ones who get to tell the story? What story would they tell about this instead? That question becomes deeply, deeply important, specifically because if it's not interrogated, it will be used to further marginalize them, to further disenfranchise them from the tools that are being used to operate and control their lives. That is a great reading of Blade Runner that I wasn't familiar with yet, because the reading of Blade Runner that is most often advanced, and that is being used for lots of different narratives about artificial intelligence, is the slave narrative. So the AI stands in for the oppressed racial other, the same with, again, aliens. I'm thinking for instance of the film District 9, which shows racial segregation, except it's humans versus the aliens. And in both these cases, Blade Runner and District 9, you can see that by means of having the AI and the alien as the racial other, you presume that all the humans are white. You need no racial diversity among your humans, because you have a racial other. And you can see that in Blade Runner: these are fugitive slaves, and all the androids are white, as are nearly all the humans, and as far as I remember that isn't any better in the new Blade Runner. And in District 9, despite the fact that it's set in South Africa, again very few non-white human protagonists. The number of black South Africans who appear in District 9 is, I want to say, something on the order of 12 total, and they are basically a faceless gang. Yeah, and are they supposed to be racial stereotypes of Nigerians?
Yeah, and so, yeah, that's taking the time to, again, specifically dig down on those facts and say: we have told this othering story for so long, and it has made its way into the process of what it is that we build these things to do, if not to be. What if we did this otherwise? Oughtn't we do this otherwise? And taking the time to do so. I think there are lots of ways in which that pattern also shows up in other genres. There are so many ways in which A.I. stories, to my mind, replicate horror tropes. The androids are zombies, the disembodied Siri voices are ghosts, so we're in a well-tried literary tradition here one way or another. Anyone else on this question? So one thing that I think is unique about this A.I. discourse that we're having is that it goes back a long way, in some ways much further than we've been talking about. We told the what-ifs of A.I. way before we started having organizations that put A.I. in their sort of hype notes, and now we're here, but there's been a whole evolution of this conversation along the way. And so, Chris, I know you have some data on how the way we talk about A.I. has evolved over the last century. So in the analysis that I'm going to share in the solo talk, one of the things I took a look at was the valence and the prevalence of which narratives have been told when, from the beginning of cinema to now. There are four main eras, if you will, and this data isn't in the solo talk, so I'm happy to explicate it. We're going to bypass Le Voyage dans la Lune, partially because it was a piece of vaudeville that was put to film, and regard Metropolis as the first serious piece of science fiction. Fritz Lang's masterpiece was the sort of beginning of this very dark, dystopian era, where especially European filmmakers were using technology to illustrate the evils of the industrial revolution. And so the very beginning of A.I. 
in sci-fi was just: it's terrible, it's dark, it's going to require us to feed our children to the machines. Then, starting with Robby the Robot in Forbidden Planet, there was an era of positivity, almost sort of American advertising for how awesome A.I. will be able to be: look, they won't even be able to disobey you without short-circuiting, won't that be marvelous? And that period lasted probably up until the 80s, when things such as RoboCop began to question, well, maybe it's not as pretty as that, because by then of course America had become sort of the cinematic juggernaut of the world and began to admit that maybe it's not going to be all Robbys in the world. And so it was a period of investigating the complications, and in fact that was the emergence of evil A.I., rather than sort of a systemic machine like we saw in Metropolis. So we see things like the horrible Proteus IV in Demon Seed, that just comes right out of the gate evil. It's also a period of unquestioning genesis narratives: champagne on a keyboard brings a computer to life, or a lightning bolt strikes a plane and suddenly it wants to rebel. And that continued up until about 2000. A.I. in that final period is where we're beginning to deal with the realities and the nuances of A.I., and even get into that sort of otherness, what does it mean to be other, and that's sort of where we are. What's most interesting about these trends is they don't quite follow the science, the peaks and valleys of A.I. hype and the A.I. winters; there's not a tight correlation, which I would have expected. So those are the sort of four big eras. There are lots of other analyses, but not to go into them here. I'm curious if anyone else has thoughts on what are some of the highlights of those moments, and maybe some works that you didn't mention that define some of those types of waves. I mean, so one interesting way to track these trends might be to look at the way a franchise like Star Trek treats A.I. 
And so the latest season of Star Trek: Discovery has an evil A.I., I think it's from the future, as its main enemy. Yeah, I'm sorry, well, yeah, I'm going to ruin it all, but I didn't really spoil anything. But it's kind of an unusual one, and it's tied up with the origin of the utopian society that is the Federation. And this is a show that's ostensibly set in the past of the franchise, but it's a much more morally ambivalent, darker vision of A.I., of the use of these systems, compared, say, to the holographic doctor from Star Trek: Voyager, where many plot lines are dedicated to exploring his emerging humanity, or Data from Star Trek: The Next Generation. And so it does seem that a franchise like Star Trek would be an interesting way to think about public sentiments about A.I. and how they're changing. And related to that, you can see the same happening in Star Wars, where initially you have this sort of unquestioned "the A.I.s are comic relief," moving towards the most recent Star Wars films, where it's much more ambivalent. So in Solo there is an A.I. who stands up for robot rights, who goes to robot fighting pits and tells the robots who fight in these pits that they don't have to have such a life, that they have free will, and she claims that she has a romantic relationship with her human co-pilot. That's quite a different way of looking at it than sort of R2-D2 beeping around a bit, and R2-D2, in one of the classic scenes in Star Wars, being told "we don't serve your kind here," right, the droids have got to leave. But yeah, that's a pretty big jump. We're going until 3:20, is that right? Yeah, so I guess, did you have any other highlights you wanted to add, Dan? I was thinking about actually kind of a tandem across these, RoboCop, thinking about RoboCop 1987 versus RoboCop 2014, and the different portrayals of what that kind of police state, drone warfare, robotics and A.I. 
narratives, what those looked like. You can see a lot of similarity between the two, obviously, because it's just a straight remake, but there's also nuance in what the characters interior to the narrative consider to be the problem of what has happened here. In RoboCop 1987 it was the "oh no, Murphy's not Murphy anymore, or is he?" And while that plays in somewhat in the remake, that changes to be more about not just "is he still himself," but what he's been turned into. They very clearly show that his automated systems can be turned on and he can be made into, literally, a piloted drone in human, you know, bipedal form. And so that shift, about automated war fighting and the militarization of police and the automation of the militarization of police, becomes much more the current fear in 2014, versus this notion of how do we stop crime in Detroit and oh no, is that person still really a person, in 1987. There's also a shifting role of the state versus corporatism across the two films. I hesitate to mention RoboCop 2, but as for RoboCop 3, I think we're fine. Is this where I can bring in that I've always maintained that Inspector Gadget is a parody of RoboCop? Fantastic. 
The shifting role of the state puts me in mind of a pretty old sci-fi story I recently read, I think it was by Asimov, called Franchise, which is I think from the 50s but was of course set in 2008. In it the supercomputer Multivac figures out who is the exact one person you need to poll to figure out how to pick the president and decide all the elections. And I was thinking about that in comparison with a more recent incarnation of the all-seeing machine, from Person of Interest, and how that's very much like a surveillance state. What the Machine does, and its evil counterpart too, is not based on who gets to vote, which I think was a much contested question in the 50s; the Machine is about who gets eliminated by the anti-terror, you know, kill squads, right? And so that shift, I think, we can probably track to our own political discourse. So I just want to touch on one more thing and then we'll take some questions, to come back one last time to the hopes and fears. Kanta, I know you are now doing some research that explores a much broader swath of A.I. narratives than maybe we have even discussed here today. So those hopes and fears, do you feel like those are inherently western hopes and fears, and that other cultures and other societies and even other genres might have a different take on A.I.? Yeah, so this is the project that I lead called Global A.I. Narratives, and the way we approach that is, rather than us doing all the research and trying to look at everything that's done across the world, we're building a network of scholars who in their own regions are experts on this, and bringing those together so that we can compare and get answers to questions like this. So far we have done so in Singapore and in Japan, and at the Japan workshop there were indeed some fascinating revelations, especially about what does the media image of an A.I. 
look like. And in Japan, so where we would have the Terminator, or even, as I showed in my presentation, two Terminators and a nuclear explosion, because it can't be dramatic enough, in Japan the most common go-to image is a pudgy blue cartoon cat called Doraemon. Anyone familiar with Doraemon? So Doraemon was a really long-running TV series and manga series, and this was something that people grew up with, especially the generation, well, basically age 30 and above in Japan, and that's why that narrative is so much more influential. And yeah, it's a cutesy cat, and also it is an android, a robot, but shaped like a cat, from the future, and it tries to solve problems that the human protagonist runs into by means of grabbing futuristic tools from its pouch. And every time there's a new tool that's supposed to be able to fix all the problems, and then it doesn't. So that's a completely different narrative of the robot buddy, and a hopeful one. Interesting. And I had a quick chat before we spoke at a conference, and she noted that existential threats, or let me say status threats, that A.I. and robots pose are not a problem, as her research has shown, in Japan or in Shinto societies, because they already have this notion that everything has a spirit, so the fact that that spirit is embodied in technology isn't really an issue. It's really a new concept for us that our washing machine might have a hope or a fear of its own, but not necessarily for Shinto practitioners. Wasn't that visualized very elegantly in one of the shorts in the Love, Death & Robots show, where you had a spirit fox being hunted in this sort of medieval Japanese society, and over time, as society evolves, it gets turned into a steampunk cyborg, basically? And I totally agree: the cultural tone, and what was possible, I think, was very different there. There are also dystopian narratives that can differ quite dramatically across cultures. So for instance, if you're talking about the apocalypse in sort of mainstream science fiction, it 
is something in the future, it's something that can be averted; an apocalypse story is a warning. Now for many societies across the world, the apocalypse has already happened. I mean, if you look at Native Americans, the apocalypse has already happened, so the stories that you get about the future are very different, informed by such a past. Great, well, I think we'll take some questions now. We have mics going around. Great, back there on the right. We have AI-powered persuasion architectures; it seems that there are algorithms that know us better than we know ourselves, going as far as not only to get us to spend money but to sway elections, maybe instigate ethnic cleansing. What were the science fiction warnings that we missed for this? I don't remember them. You know, these are modern myths, and the myths told us to know thyself, but it doesn't seem we've been getting that recently, or have diagnosed that. There's a positive version of that I can think of as an example, which is that the long arc of the I, Robot series told of a bigger and bigger AI that was influencing society, but by the end of the short stories it had faded so far into the background, and humans had just become prosperous, and they didn't even make the connection. Not quite the warning that you're looking for, but I know that that was Asimov's ultimate arc for I, Robot. Other examples of broadly dystopian AIs? Perhaps not so much the stories of intelligent machines, although, okay, so there is Colossus by D.F. 
Jones as a novel, and it was turned into a film in 1970, which was basically about the US building a computer that can control its defenses. So the US has a defense supercomputer, and then it turns out that the Soviets also have a defense supercomputer, and they decide that they know what's best for humanity, based on the cultural and political system in which they have been produced. And then it starts saying, okay, surrender all your power to us. Humans say no; Colossus says, well, you have given me access to all your nukes; Colossus throws nukes; humans have to obey. A more recent example is actually Person of Interest, the latter arc of Person of Interest, and again, spoilers, but the show's been over for four years now and it's all on Netflix, so you have no excuse. The culminating arc is about two competing AI supercomputers who are warring against each other to launch people in a certain direction. The name of the supercomputer in the US military arc of the thing is Northern Lights, and that was made about two years before Edward Snowden's PRISM leaks. They got to a point where they actually had to say, okay, we need to completely reframe the story that we are telling, because the things that we have been talking about, as soon as we write them, they happen; so we need to think differently about what it is we are considering science fiction about AI to be. And that was entirely about a very large system of algorithmic nudging and influence, moving people through what they thought was just the water of their lives. I think what's delightful about the question is that the bigger the technologies, the less we thought about them. Lots of older films are having to account for the fact of cell phones; of course, instant access to any person on the planet was not a narrative possibility when stories were told in the 50s. So when they remake The Blob, they have to think about it, or a later storyteller has to think, well, why would we disable the 
networks, I know the Silence have access to it. So I think Facebook is actually one of those technologies that is so pervasive, and the surveillance that's involved, and the influence that it has, took everyone by surprise. It's a great question. It could also be that we told those stories, but we told them about television instead of Facebook, right? And I think we learned good lessons about TV being this problematic medium that maybe we forgot when we switched mediums. I'm also wondering, and I've heard you say during your presentation that people tend to go to extremes when they're describing it, and you also mentioned that 70% of British citizens have a very dystopian view of the technology. So I guess the question I would ask is, assuming that a narrative about AI is rooted in a time and place, what are we missing today, what are our blind spots? I mean, there are other answers to that, but I literally did a longitudinal study of science fiction tropes to find out the stories we're not telling, so if you can wait. There might be one more question, because that clock says we have one more minute; can we get it? You're eager to talk, but: I'm Mike Nelson with Georgetown University, teaching in the Communication, Culture and Technology program. This has been fascinating. You've talked a lot about the robot overlords scenario, where we give them all the power and they use it. You've talked a lot about the robot underclass that rises up. But there's another scenario that appears less frequently, and that is the robot underlords, who kind of take over the basic grunt work of civilization, slowly work up the stack until they're sort of taking care of all our needs, and then civilization dies of boredom and complacency. WALL-E is the best one. Kurt Vonnegut, Player Piano; Kurt Vonnegut and the Tralfamadorians. Do you have other examples, and how likely do you think it is that we'll just be so lazy we'll cease to challenge ourselves? There was a screenshot from WALL-E illustrating it; 
it's a fairly common one. Usually it has to do with work, but you also have the sort of social obsolescence. So everybody immediately jumped on WALL-E. You said Bradbury? I cannot remember the name of it off the top of my head, but it's entirely about a future, probably Martian, civilization in which the people are gone, because the automated systems of their house and the automated systems of their world took care of everything, to the point where they had no reason to do anything. As for your secondary question, I honestly don't think that's very likely. I think we get bored easily; we innovate on that boredom really well; there are always a few ways to amuse ourselves, even if they're just remixes of old ways. And also, the more tasks that we manage to give to machines and robots and computers, the busier we seem to get. The original series' "Spock's Brain" is another example. Well, we're going to have to stop there. Thank you to our panelists very much. Thanks, everybody. Next up, I'd like to reintroduce Chris Noessel, who, in addition to his day job as a lead designer with IBM's Watson team, somehow finds the time to write books like Designing Agentive Technology: AI That Works for People and a personal favorite of mine, Make It So: Interaction Design Lessons from Science Fiction, which led to him establishing the scifiinterfaces.com blog, where he's been doing some amazing work surveying the stories we are and aren't telling each other about AI, which he's going to talk about right now in our third and final solo talk. They're ready for me. My new end time is 15, is that right? 35? 37? There's a graphic. There we are. Thank you for that introduction; that saves me a little bit of time in what I'm about to do. I am an author, I'm a designer of non-Watson AI at IBM in my day job, and I am here to talk to you about a study that I've done for scifiinterfaces.com. Let me begin with a hypothetical. Let's say we were to go out and take a poll, the vox populi, the voice in the street, and ask them 
what role would you say that AI should play in medical diagnosis? Then we should think about what their answers would be if we showed them this: Baymax, from the Big Hero 6 movie. Then think about how their answers would change if we then showed them this, which is the holographic doctor that we just mentioned, from the Voyager series of Star Trek. And then how, of course, would their answers change if we reminded them of Ash from Alien, who was ostensibly a doctor on that ship, right? These examples serve to illustrate that how people think about AI depends largely on how they know AI, and to the point, how most people know AI is through science fiction. Which sort of raises the question: yeah, what stories are we telling ourselves about AI in science fiction? So I first came on this question doing an AI retreat in Norway. It was an unconference that was sort of sprung on us, and they said, okay, what do you want to do here? And I had just completed an analysis of the Forbidden Planet movie in the context of the Fermi Paradox, and it required me to do this really broad-scope analysis, unlike the normal ones that I do on the blog. So I simply asked that question. But to answer that question takes a lot. I thought, of course, that I could do it in like a two-hour conference setting, but no, it took me several months after I got home from that setting, because what I needed to do was look at all of science fiction movies and television shows, and that's quite a lot. I don't think I've captured them all, and of course I am bound by English speaking for the most part, and I am certainly bound by movies and television, but I wound up with 147 titles in total that I included. And actually all the data is live in a Google sheet that you can access if you like. But I took a look at each one of those titles and I tried to interpolate what the takeaway is. I said, okay, if you were to watch this story and leave the cinema, or get up off your couch, and be asked the question, "so what should we do about AI?" That led to 
a series of takeaways, and those takeaways run quite the gamut, everything from "evil will use AI for evil" to "AI will seek to subjugate us," which is the perennial Terminator example, but of course even the Sentinels in The Matrix. In the diagram that I'm slowly building behind you, the bigger text represents the things that were seen more commonly throughout the survey of movies. It also included things like "AI will be useful servants": I mentioned that sort of happy era of sci-fi AI; Robby is part of that, and much more recently, Baymax. It includes things like "AI is just straight-up people." And "you turn on the machine and it's trying to kill you": there are comic examples, like the Robot Devil from Futurama, but also bad and disturbing movies like Demon Seed with the Proteus IV AI. So once I did that, I had 35 takeaways that all connected back to the 147 properties that I had gathered together, and if you look at the website you can actually see, it's hard to see in this projection, but there are lines that connect which movies and which TV shows connect to which takeaways. So if you're enough of a nerd, like me, you can actually study and say, where does RoboCop fit in all this? So that was my analysis of what stories we are currently telling. It's a bottom-up analysis, it's a folksonomy, but it gave me a basis. And how do we know what stories we should tell about AI? That's a tough one. It's a big value judgment. I'm certainly not going to make it, so I let some other people make it, and those particular people were the people who had produced think-tank pieces, thought pieces, or written books on the larger subject of AI. I thought I would have a lot more than I did; I wound up with only 14 manifestos, and they include everything from the AAAI presidential panel on long-term AI futures to the Future of Life Institute, the MIRI mission statement, OpenAI, and Nick Bostrom's book. But from these 14 manifestos, I read them one at a time, and instead of takeaways, which is what we got from the 
shows on this side, I was able to say, okay, well, what do they directly recommend we do about AI? That also gave me another list. That list includes things like "artificial general intelligence's goals must be aligned with ours," or "AI must be valid: it must not do what we don't want." It's a nuanced thought. But similar to the takeaways in this diagram, you'll see that the texts that are larger were more represented in the manifestos. It included things like "we should ensure equitable benefits," especially against ultra-capitalist AI, and, this is a really tiny one, "we must set up a watch for malicious AI," all the way down to the bottom: "we must fund AI research," "we must manage labor markets upended by AI." I won't go through all of these, I don't have time, but in total there were 54 imperatives that I could pull out from a comparative study of those manifestos. And so from science fiction we have a set of takeaways, and on the right, from manifestos, a set of imperatives, and really it's just a matter of running a diff, if you know that computer terminology, to be able to say, okay, what of here maps to what of here, and then what's left over? Again, this is a lot of data, and I did produce a single graphic that you can see at that URL; I'll show it several times in case you want to write it down. So 100-plus years of sci-fi shows suggest this, and AI manifestos suggest this, and then I ran the diff. There are some lines there that are hard to see from this document. The main thing that we find is, of course, there are some things that map from the left to the right, and those are stories that we are telling that we should keep on telling, and those are not the interesting ones. The interesting ones are the ones that don't connect across. So this is the list of those takeaways from fiction that don't appear in the manifestos. These we can think of as things that are just pure fiction, things we need to stop telling ourselves, because, if we trust the scientists as being the guideposts for our narrative, they 
include things like "AI is evil out of the gate." Now, of course, there's an imperative up there that says "evil people will use AI for evil," and that's still in, but this one right here: nobody believes that AI is just an evil material that we should never touch. Interestingly, those manifestos are not interested in the citizenship of AI, partially because that's entailed in general AI, whereas the manifestos are much more concerned about the near-term here and now. And that includes things like "oh, there will be regular citizens" versus "there will be special citizens," and even this notion that AI will want to become human. Sorry, Data; sorry, Star Trek. So there is a list of pure-fiction stories that we should stop telling ourselves. That was not the point of the study. The point of the study that I wanted to do was on the other side, and that's the list of things that manifestos tell us we ought to be talking about in science fiction, but we're not. They include things like "AI reasoning must be explainable and understandable." I completed this right around the time of the GDPR, so I'm really happy that that's out there. But it includes things like "we should enable human-like learning capabilities at a very foundational level." It's got to be reliable, because if it's not, and we depend upon it, what happens? It includes things like "we must create effective public policy," and that includes effective liability, humanitarian, and criminal justice laws. It includes things like finding new metrics for measuring the effects of AI and its capabilities. And again, I'm not going to go into those individual things; they're fascinating, and you can look at all the blog posts in order to read them all. And there's lots of analysis that I did all over this thing beyond that set of takeaways, like: if you want to know what country produces the sci-fi that is closest to the science, it turns out that it's Britain. The country that's most obsessed with sci-fi is, surprisingly, Australia. And of course the most prolific producer of 
AI shows is the United States, with India far, far behind us in our actual production of movies in total. This is a diagram of the valence of sci-fi over time, if you're interested; it's slowly improving, but it hasn't reached positive yet. And then I even did an analysis of the takeaways that we have in science fiction based on their Tomatometer readings from Rotten Tomatoes, so you can actually see, if you're making a sci-fi movie, which takeaways you can bet on and which ones you should probably avoid, just for the ratings. But this is all stuff that's entailed in the longer series of blog posts. I also include an analysis of what shows stick to the science the best, in order to sort of reward them and raise more attention. Damien mentioned Person of Interest, and that's number one in this analysis, but it includes things like Colossus: The Forbin Project, the first Alien, Psycho-Pass: The Movie, which is the only anime that made this particular list, and even, though I don't like the movie, Prometheus, because the AI in it is pretty tight. I also included a series of prompts, which is to say, okay, if I were to give a writer's prompt about some of these ideas, can I spark something? This is an example: what if Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human whose job was to babysit it, and Watson discovers that Holmes created the AI Moriarty for job security? So I tried to put these prompts out there to see if anyone would take the bait. So far no one has, but I'm doing my part. And some of those things I have begun to write on myself, since no one else had taken the bait, and tried my hand at a near-term, narrow-AI problem with the self-publication of this last year. Okay, so that's a lot to take in, and I understand that it covers like 17,000 words or something on the blog. So what I wanted to do to summarize all this is what I did on the sort of poster that I created, which is to read off the sort of five categories of 
findings that I found. These are nuanced, so I'm going to read them. The first category of stories we should be telling ourselves is that we should build the right AI. Narrow AI must be made ethically and transparently and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and just make things worse. As we work towards general AI, we have to ensure that it's verified, valid, secure, and controllable, and we must also be certain that its incentives are aligned with human welfare before we allow it to evolve into intelligence and therefore out of our control. Sadly, sci-fi misses about two thirds of this in the stories that it tells, and that's largely, I think, because they're not telling stories about how we make AI, good AI. The next category is we should build the AI right. So this is really talking about the process, like what do we do as we're constructing the thing. We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout, to ensure diverse perspectives and equitable results; if we use the wrong people or the wrong tools, it affects our ability to build the right AI, or, more to the point, it'll result in an AI that's wrong at some critical point. Sci-fi misses most of this; nearly 75% of these imperatives from the manifestos just aren't present in sci-fi. The third out of five is that it's our job to manage the risks and the effects of AI, and there weren't a ton of takeaways related to this, so it's a very crude sort of metric, but: we pursue AI because it carries so much promise to solve so many problems, at a scale that humans have never been able to manage ourselves, but AIs carry with them risks that scale as the thing becomes more powerful, so we need ways to clearly understand, test, and articulate those risks so that we can be proactive about avoiding them. The fourth out of five is that we have to monitor 
AI. AI that is deterministic isn't really worth the name of AI, but building non-deterministic AI means that it's also somewhat unpredictable: we don't know what it's going to do to us, and it can allow for bad-faith providers to encode their own interests in the effects. So to watch out for that, and to know how effective well-intended AI is, or if well-intended AI is going off the rails, we have to establish metrics for its capabilities, its performance, and its rationale, and then build the monitors that monitor those things. We only get about half of this right. And the last sort of super-category in the report card of science fiction is that we should encourage accurate cultural narratives, and, it's very low contrast, but we just don't talk about this. We don't talk about telling stories about AI in sci-fi very much, if at all; certainly not in the survey at all, right? But if we mismanage that narrative, we stand to negatively impact public perception, and certainly legislators, to the point of this thing, and even encourage Luddite mobs, which nobody needs. Okay, so that's the total report card, the short-form takeaway from sci-fi as compared to the AI manifestos, and the total grade, if you will, is only about 36.7%. Sci-fi is not doing great, but that's okay, right? We should have tools such as this analysis in order to poke at the makers of sci-fi, and even to encourage other creators to create new and better and more well-aligned AI stories, and that's part of why I've done this and part of why I'm trying to popularize the project. I'm repeating that URL here for you if you're really curious about this kind of work. I wrapped up Untold AI last year on the blog; I'm dedicating the entire year of 2019 to analyzing AI in sci-fi, and right now I'm in the middle of a process of analyzing gender and its correlations across things like embodiment, subservience, and germaneness, and you can see that Gendered AI series on the scifiinterfaces blog. And that's it. I am done with one minute, so I have an extra 
minute if there's any time for questions. Thank you. Do we have time? We're going to move on. All right, contact me later with questions, because we've got to go. Thanks, Chris. And just a reminder, there will be a reception afterwards, so you can ask our panelists and speakers questions then. Now I'd like to introduce another brief video provocation from another one of our advisory board members (Chris is also an advisory board member; thank you for that, Chris), one who couldn't make it today: Stephanie Dinkins. Stephanie is a transmedia artist and associate professor of art at Stony Brook University who's focused on creating platforms for dialogue about AI as it intersects race, gender, aging, and our future histories. She is particularly driven to work with communities of color to co-create inclusive, fair, and ethical AI ecosystems. One of her major projects over the past few years has been a fascinating ongoing series of recorded dialogues between her and a sophisticated social robot named Bina48, to interrogate issues of self and identity and community, and it's Bina48, which is the robot, that is pictured at the beginning of this video message that Stephanie created for us today. So if we could run the clip. I wonder what happens when an insular subset of society encodes governing systems intended for use by the majority of the planet. What happens when those writing the rules, in this case we will call it code, might not know, care about, or deliberately consider the needs, desires, or traditions of the peoples their work impacts? What happens if the code making decisions is disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserving wealth, for the good of the people? I am reminded that the authors of the Declaration of Independence, a small group of white men acting on behalf of the nation, did not extend rights and privileges to folks like me, namely black people and women.
Laws and code operate similarly, to protect the rights of those that create them. I worry that AI development, which is reliant on the privileges of whiteness, men, and money, cannot produce an AI-mediated world of trust and compassion that serves the global majority in an equitable, inclusive, and accountable manner. AI is already quietly reshaping systems of trust, industry, government, justice, medicine, and indeed personhood. Ultimately, we must consider whether AI will magnify and perpetuate existing injustice, or will we enter a new era of computationally augmented humans working amicably beside self-driven AI partners? The answer, of course, depends on our willingness to dislodge the stubborn civil rights transgressions and prejudices that divide us. After all, AI and its related technologies carry the foibles of their makers. Artificial intelligence presents us with the challenge of reckoning with our skewed histories instead of embedding them in algorithms, while working to counterbalance our biases and finding a way to genuinely recognize ourselves in each other, so that the systems and policy we create function for everyone. I see this moment as an opportunity to expand, rather than further homogenize, what it means to be human through and alongside AI technologies. This implies changes in many systems: education, government, labor, and protest, to name a few. All are opportunities if we, the people, demand them and our leaders are brave enough to take them on. Stephanie, thank you so much for putting that together for us. We are now going to transition to our third and final panel of the day. We've had AI in fact, we've had AI in fiction, and now we're going to talk about bridging the two. So this one will be led by Ed, who you've already met, so take it away, Ed. Thank you, Kevin. Come on up, friends. So, yeah, AI: we had facts, we had fiction, so this is going to be faction, or maybe we're all faked.
But either way, I wanted to start, this is going to be a conversation about science fiction not just as a cultural phenomenon or a body of work of different kinds, but also as a kind of method or a tool. And so I wanted to just start and ask you, again with that clever trick of having you introduce yourselves, to talk a little bit about how you see science fiction operating in your worlds outside its usual boundaries, when it's not working as fiction, when it's doing something else in the world. So some observations about how you've seen that working in your own professional trajectories. Hi, so my name is Malka Older, and I'm a science fiction author. So I actually say part of my job is to encourage science fiction to work beyond the boundaries of recreational fiction, so to speak. But I'm also a sociologist and academic, which has become very interesting, because I get asked to speak at more academic conferences about my fiction books than I do about my academic work, which is very difficult for my department to understand. And I've also started to get asked to speak as kind of a futurist to various groups that are interested in knowing what I think will happen in the future. And so I'm really happy that you pointed out the idea of method, because one thing that I've found very interesting, when I'm asked to make up futures and then tell people about them, is that sometimes the questions are not just about what I've said, or what they agree or disagree with, or what the implications are, but how I did it: how I go about world building in my books, what I try to draw from reality, and how I keep it rooted. And so I've started doing a lot of thinking around that, and I think that it's a really important topic for us to touch on. Hey everyone, so my name is Ashkan Soltani. I'm a technologist, and I work in tech policy, and most of my work really involves translating kind of technical, complex subjects for folks that make policy, to help them understand.
And this is where metaphor for me is really critical: finding the precise metaphor that articulates the principles of the thing that I want to describe, but is still accessible and maintains the consistency of the thing that I'm trying to describe. And, you know, if folks remember Lakoff, the metaphor shapes the frame and the questions and the considerations that come to mind. So there are some things that already exist for which you can find a metaphor easily, and then for the things that are forward looking and don't have a physical metaphor in the real world, this is where storytelling comes in, and particularly sci-fi, where you can imagine things in an accessible way and help people wrap their heads around the nuances of the thing by immersing them in the story and then understanding the contours. And I think particularly I'm a fan of the kind of what-you-know-plus-one frame, and in fact, as some people have said, repetition isn't helpful, so as long as you can get away from the cliche and really still engage the person, it helps people think one step beyond what they currently know. And why that's helpful is that often there's an inflection point, a non-linear trajectory around things we care about, and I think, again, sci-fi around AI is really useful for understanding some of the things I really care about, which is like privacy and security, around, for example, things to do with scale, right? So we talked about enforcing policy through an automated system, and one of the things that it does, which Kevin and I have written about quite a bit in the past, is around efficiency: making things that were previously expensive to do or difficult to enforce perfectly so cheap and so accessible that you can have things like perfect enforcement. And so if you have a robot that's able to issue parking tickets anytime
anyone spends over a second in the parking spot, that really radically changes the way parking enforcement works, and we then have to re-evaluate the laws and norms. And so that's one area where I think sci-fi is helpful: in understanding scale, and helping people understand, particularly when policy makers don't have direct access to the things we're talking about. Some folks have never used the technologies we're describing. The other place where I think it's useful is really around understanding reach. So I've worked as a policy maker, I've worked in various parts of government and for the press, newspapers, and I've also worked as a consultant on a television show, not a sci-fi but kind of reality TV, to do with surveillance and such. And the reach for that show, even though it's kind of not realistic in some senses, making sure people understand at least the nuances of the technology, reaches so many more people and is so much more accessible than some white paper that the White House puts out or some Washington Post story that only 20 people read. So I think the reach there is really important. And then finally, I think the last thing to think about is how the use of technology, and AI particularly, changes how we think about people from a policy making perspective. So we talked about how it changes norms and can be used as a kind of enforcement mechanism, but also think about how it changes just how we work, how the nature of our interactions with one another changes, and this is things like employment and labor laws and entitlement to equity. So in today's marketplace we're seeing companies that have access to data and AI and technology being able to amplify their workforce significantly more than any other company, when we just look at stats like what certain tech companies are able to make per employee.
So the stat I like to throw around is that Facebook makes in profit about $800,000 per employee per year, as compared to Google, which is about a quarter of that, and then the next company down, like Ford, is a tenth of Google, so Facebook is making something like 40X what Ford makes. And so the amplification of using software and automation, and how that changes equities, is also really fascinating to me. So all three aspects I think are useful, and using sci-fi to understand them is a useful tool. Perfect, well, thanks for that lead-in and precursor to talking about work. So my name is Kristin Sharp. I run the Work, Workers, and Technology program here at New America, and look in particular and primarily at how automation and artificial intelligence are changing both the structure of work and the kind of work that we do, and what that's going to look like over the course of the next 10 or 15 years. We do a lot of this work in communities around the country, and organize and lead conversations between different stakeholders in a community about how work is changing as a result of new technologies. And one of the things that we do in order to actively get people thinking about it, picturing what that looks like, is run economic scenario planning exercises, where people have to tell the story of what work and society, what their neighborhoods, what their jobs look like 10 to 15 years from now. And from that we've tried to catalog all of the stories that people told and get a little bit of data about what kinds of things people are extrapolating, what kinds of things they're projecting, because of what they know about their own jobs right now, the companies they run, the kinds of civic organizations they work with. And it's been a really fascinating thing to see some of the imagination go from how people think about their jobs right now to what they see society looking like 15 years from now. And the big takeaway from that is that
it is really up to us right now in the policy making world to set out the kinds of parameters that will make that a good future versus a less good future. So it's been a fun project to start thinking about that. I'm Molly Steenson. I wear a number of hats at Carnegie Mellon. I'm a professor, I have a K&L Gates associate professorship in ethics and computational technologies, I'm the research dean for the College of Fine Arts, and I sit in the School of Design with an affiliate appointment in architecture. So why me and why sci-fi? Among other things, I am a historian of AI in architecture and design, and I teach courses that explore what sci-fi does, and then bring in people from Carnegie Mellon and beyond to talk about what AI does in reality. So we take apart some of the cliches that we see, we look at how these cliches have developed over time (in fact, the various taxonomies of sci-fi stories and sci-fi cliches that we've been discussing today are really helpful), and we take into account the kind of work that is being talked about right here on this panel: policy reports, scenario literature, movies, and plays. Thank you all. So I want to start with this question of cliches and the way that science fiction works. Kevin mentioned at the beginning of this meeting Neal Stephenson's notion of science fiction being able to save you a lot of time by putting people on the same page around a big idea that you can get organized around. Asimov's robot work has been cited in thousands of engineering papers; the three laws of robotics, whether they're actually the right three laws or not, have been very powerful in framing a lot of discussion and actual research and innovation. So stories and science fiction ideas tend to become these little compressed file formats, and you can unfold them and get a whole world out of this idea, but sometimes you get the cliche and you get the bad meme. So what is the interface like?
Are there other layers between the science fiction writer and the policy makers? What are the other filters to pay attention to when we're thinking about how science fiction works in the world? I'm looking at you, Malka. Yeah, good, because I'm ready for that one. You put so much in there, you compressed a lot, and so we're going to unfold that into a whole world too. And I think that image is a really interesting place to start, because you do have science fiction that starts with some idea, and ideally, as a writer, what we want to do is take that idea and build it into a believable world by really unfolding it into the detail: by thinking about how people behave, by thinking about unintended consequences, and by thinking about the extra things that don't have anything to do with the plot but give you a full world. And that's part of how we do our job well, and it's very much in the sense of scenario planning and some of the other types of futurism that go on, in terms of really trying to think beyond this one idea and look at all the consequences of it. But at the same time, often we see that that gets translated into a single catchphrase or a word that is simplified, either for people who haven't read the books or seen the movie, or for people who have but just remember that one key idea. And sometimes that works well, but a lot of times it doesn't. And we have these classic examples now of things like Fight Club, which have come to mean the opposite of what their nuanced and full versions were intended to mean. So that's one part of it: things are going to be simplified down. They are going to turn into a shortcut, both in memory and in broader culture, and we have to be aware of that and make sure that we're pushing things into full worlds as much as we can.
The other thing that I want to pick up is another place where things tend to get simplified into memes and images and snapshots: the transference from what we do, either in policy work and research or in literature and media, into news stories. A lot of what we've talked about here today, a lot of the examples that have come up, have been cultural touchstones that have become famous and become images, you know, Skynet, Terminator, and we see them being attached over and over again to news stories. And one thing that I've been noticing in my own news consumption is that I don't read a lot of news stories now. I see a lot of headlines, and I see the line that people choose to put under the photo in the tweet or in the post on Facebook, and I think I have an idea of what's going on. But what we know is that those headlines and those pulled-out first lines and those photos are not picked by the authors of the articles. They're picked by editors. There's no transparency, there's no accountability on this. And often those are the ones that are really pulling out the suggestive images, the scary images, the most clickbaity thing that they can find from that article, and maybe not even find in the article. And so we're seeing a lot of the deeper, more thoughtful things get transformed into clickbait, and that's a real issue. The thing that your question about cliches made me think about is that I was surprised to learn, having done probably 50 different storytelling sessions with people across the country, in lots of different cities, in different regions, that in the absence of a vision, a positive vision about what the future looks like, people's instinct is to just go dark. And so I think that a lot of what you're seeing, in terms of people picking the visual or picking the caption for something, is the human instinct to grab your attention by going dark.
And the funny illustration of that is that of our 40 to 50 stories about this, about what the future of work looks like and what people think of society going forward, probably 60% of the people named their story The Hunger Games. And it's a really revealing way to see how people are thinking about this: they see the lack of economic mobility, they see societal questions about what is happening in the split between the professional and the service-related sides of the work world, and they go to that dark place. And I think that putting out there some other kinds of policies and other kinds of visions can in fact help combat that, but that's not the whole answer. I do want to question, though, and I don't know the answer, whether that is human instinct or whether that is really a product of the zeitgeist, a product of the different stories that we've been reading and seeing and listening to over the past couple of decades. I want to just touch on this: a cliche and kind of over-compression is a real thing, right? Like, the moment the Emoji Movie came out, I thought that was just the end. Like, that's just the end. The beginning of the end. But one person's cliche is another person's profound, mind-blowing idea. The way I think of it is maybe hot sauce: depending on your tolerance for hot sauce, you might be acclimated enough to handle more nuance, but for some people just a tad is enough. And so if it's useful for invoking an idea and triggering a frame, then it's not cliche to that audience. So I would say the way you deal with that is in the application of the thing: depending on your audience, you figure out the level of specificity. And sometimes the cliche is actually useful, like, for me, things like supporting the troops.
Like, everyone supports the troops, and you can actually rally around concepts without getting into the nuances, to build consensus and then bring people on board and then move it in the direction that you want in the policy world. So sometimes it's useful, and sometimes it really depends on the application, I think. One of the problems with AI is that there aren't really good ways to understand it. It's difficult to understand anything that happens within a black box. You've got inputs and outputs and a bunch of question marks, right? So that's why it's appealing to have the shorthand of cliches. I'm going to blank on the person who referred to it this way (it's in my computer backstage), but metaphors: we use them to talk of the this-ness of a that, or the that-ness of a this. And I'm kind of curious about how we use sci-fi to get around the that-ness of the this and the this-ness of the that. Yeah, so a lot of really great ideas here. One thing that you've made me think is that cliches are like the autocomplete of the mind. People mention The Hunger Games because it's accessible and there, whether it's in the zeitgeist or we all just saw too many trailers or whatever, at the time when you were doing the interviews. But then that becomes the frame, right? Then it becomes the title of the story, and it carries all of this baggage with it. So I don't think we can get away from that. We're always going to use that kind of shorthand, and so there's a certain kind of power and responsibility in the way that we deploy language. So I wanted to ask about that and talk a little bit more about methods. One thing that I am thinking a lot about right now is this whole notion of imagination, and how do you inspire people, invite people, to imagine the future? Because as you were saying, Kristin, most of us don't really think about it very much. And if you just throw people into the deep end, they'll cling to the cliches, or it's going to be really dark.
So you have to scaffold and give people some tools. And so there's an interesting dynamic: should science fiction be playing this role of imagining the futures, imagining more diverse, more inclusive, more inspiring futures? Or should we be focusing more on inviting everybody to imagine the future? That was a trick question, and you saw through it. One thing that I think is interesting is we all have different kinds of toolkits that we use. One thing that's useful from design is the fact that there are ways for people to get their hands on things and create futures, or create science fiction, create design fictions, in different kinds of ways. They could make future artifacts, they could brainstorm or role-play a story, they could act out a service scenario. We have something called critical design as well, which is a pretty dark, gallery-oriented version of design futures, but it's a way of creating future artifacts and putting them into narratives. And the fact is that this is something that anybody can do. We could do this at home, we could do this in our board rooms, we could do this in all kinds of places. I really like that. And I think one of the things that I'm really interested in seeing, in this question of how we use sci-fi's potential in more places, is really to look at more transversal and cross-cutting approaches: not just bringing in a sci-fi person, although I wish you would all bring in sci-fi people to the places where you work, but also taking seriously the work that they're doing and getting that kind of thinking more broadly into other industries. And then similarly, I as a sci-fi writer am very interested in knowing more about how other people do their work. I think we have a kind of specialization fetish, and it's really useful to start expanding those different ways of thinking into board rooms, and then vice versa, and yes, everywhere. I'm going to play just devil's advocate here.
One of the challenges, I think, and maybe potentially one of the reasons why we see such dark sci-fi futures, is that it essentially acts as a countervailing force to innovation at large. So, coming from California: so much of innovation and startups and creation is having this utopian vision of what the thing you're building is, against all odds, raising funding, competing with competitors, bringing it to market. And so most of the creators of a lot of these technologies have a singular positive vision of their technology or their tool as deployed in society, and therefore miss huge gaps in what could be the negative, unexpected consequences for unexpected, unaccounted-for stakeholders, or people not represented in the debate. And so I think one function of these visions is to remind folks: say you envision this home care robot, or self-driving cars as the future of mobility, and it will take care of everyone's kids and everything, a kind of puppies-and-rainbows thing. But maybe think about the displacement of work, the displacement of people, the liability impacts, all of the negative externalities that are created, which the culture of innovation and innovators have been kind of forced to forget, right, have been forced to set aside to just think about the upside.
Certainly it's true as far as people's perception of Silicon Valley goes, but I think you can also flip it, so that the negative stuff that people are talking about and thinking of and picturing is just the warning sign, right? It's the warning sign for what happens if you let something go unchecked, and the flip side is: we can check it. So think about it as a way to picture the guardrails rather than just a warning system. I think Black Mirror, the television show, is a really good example of that, of things that take something to such a negative extreme that it flags for you: don't let it get this far, let's see how we can put the guardrails on to get to the good stuff. It also seems to be true that there's a lot more dystopian science fiction than there is, you know, constructivist hopepunk. Yeah, I may be biased in this question, but I think there's a lurking question underneath here, which is: what is the difference between a good story and good policy? And I think one thing that maybe you're getting at here, Ashkan, is that sometimes a good story is not good policy, because stories are supposed to make us feel good, or stories can often be intrinsically kind of self-centered, right, they can be ego exercises, and policy shouldn't work that way. So what is the difference between those two modes of organizing the universe, and how do you translate between them?
Well, I mean, I would say that, first of all, if the story is a good story, hopefully it's avoiding that sort of ego, the we're-disrupting-convenience-stores sort of angle. Usually, if you're reading something like that, it doesn't read as a good story. Now, if you adorn it with a hundred million dollar budget and lots of CGI and big stars, it may still seem like a good story even though it's really not, but that's a separate problem. I think comparing policy and stories is maybe not quite the right dichotomy, because stories really should be opening the frame for how we think about policies. What we do want stories to have, usually, although not always, and there are lots of people who would disagree with me on this, like Dadaists, is some kind of ending and closure. You want something that feels satisfying, where you feel like you've been on a journey and learned something, or had an insight, or gotten somewhere with the story. And policy isn't necessarily like that: it doesn't necessarily wrap up, it doesn't necessarily have an ending. But what I hope good stories do is give us ideas, give us empathy, change our perspective, and that should help us think about policy in a way that's a bit outside of our personal narrow framework or our political party's narrow framework, and give us a wider view and a different view. The other thing fiction is doing is showing you how to actually execute an idea. A lot of times you just sort of brainstorm about stuff, and we see this in communities that are trying to develop methods for connecting people to new sources of income: it's great to say, you know, why don't we have all the non-profit organizations work with the schools, this will be amazing, but it's really hard to actually figure out the steps that have to happen in order to execute that.
And so fiction, and sci-fi in particular, can show you what the steps are and say, you know, if you're thinking about a Martian civilization, you have to actually have an organization that is dealing with all of the different countries that go in together and how they work together. It's the picturing of what the action steps are, and also the end goal: sometimes, even though you talk about something as a great thing, what actual success looks like isn't always clear unless you speculate about it, unless you imagine it. Yeah, we have a colleague who's now at another university who did a wide-ranging survey of decision makers around climate policy, asking them: what does the ideal future look like? What are you working towards? And people just didn't have a vision, or they had a number, like getting down to some level of parts per million. But it's actually really hard to come up with a concrete and actionable plan for where you're trying to get that has that end goal in mind, rather than just proceeding step by step. So how do we start to integrate this, how do we do this more, if we think that this is a good idea, bringing science fiction in? So if, as you were saying, we wanted more people in this room to invite more science fiction writers into some of the organizations that they're part of, what are some of the methods and the steps to actually use this toolkit of storytelling about the future to reframe or improve other kinds of decision-making processes? Yeah, I think that there's a range of things that can happen, starting with bringing in writers in residence, which I actually think is a great idea for all kinds of organizations, whether they're for-profit or non-profit or research-based. Having people that think a different way than the majority of the people in your organization is something everyone should consider budgeting for.
And also bringing in some of the techniques. I mean, we talked about scenario planning, and that is not so dissimilar in some of its forms from what I do as a writer, when I'm brought in to do kind of future stuff. Like, I was asked to go to the CIA and talk to them about the future of security in Africa, and I mean, I am not an expert on security or Africa, but I thought it was really interesting that they were bringing me there to make up stories about it. And so, when I think about how I'm going to do a good job at this, and when they ask me how I do it, my added benefit for them is that I am totally willing to make shit up. I have a lot of practice doing that, and I'm really happy to just come up with ideas that don't necessarily have to be rooted in the reality of engineering or the reality of tech, as long as I feel like I can root them in the reality of how I know people behave, because for me, that is the key factor that makes stories believable and accessible to people, that makes stories work. And so that's what I do. I found writing science fiction particularly freeing, because when I got stuck somewhere in a plot where I wanted something to happen, I could make up a technology that fixed that problem. Now, some people don't find that freeing in the same way, because they get hung up on how we will make this technology work. And that is fine; that actually is great, because it gets you a very different kind of writing and science fiction. But maybe for those people to really get into totally making shit up, they need to write fantasy. Or maybe they need a different kind of exercise, based in a different kind of reality, to free them up to feel like, okay, I'm going to think big and different about how the world could change. How do each of you give people permission to do this?
Because that's, I think, part of what you're saying: that you are like a card-carrying fabulist, right? You are allowed, you're empowered, and you will show up and do this. I'm going to make those cards. You should totally do that. I would like one. So how do you do that? Because I've found in the work we do at the Center for Science and the Imagination that the invitation is really important, and there are different ways you can do it. But what have you all encountered? I think that the more interactive you can make it, the better. I don't think that everybody is suited to be a writer and to conceptualize a creative story like that. So a lot of times we've done things like flipping a card that shows some specific thing, and then you have to make up a story about that thing. Or putting a set of Legos on the table and saying you have to make the community center of the future, where people gather in different ways, and what does that look like? I like the one you mentioned earlier about thinking about an artifact of the future. Anything you can do to get people outside of their normal thinking, make them picture something else, and then describe what the picture looks like is helpful. My thing is getting students to turn things upside down and not take them for granted. Take technologies, turn them upside down; take apart movies, take apart books. A lot of them have never thought about doing this before. If I'm teaching master's students, they've come in to do a master's in interaction design; they're going to go work at Google when they're done, and they haven't really thought about what actually makes everything go. So we look pretty critically at what runs behind things. We look at the role of AI in society. In the AI and culture class we take apart movies. We take apart The Hunger Games, actually, and Fahrenheit 451, the old version of course.
And look at what the different tropes are, and then I also get them to do their own creative work. They have to do something interpretive. So I have philosophers doing paintings, HCI students doing plays, and architecture students curating a fashion show. All of these are just different ways around and through. But that's the method that's at hand for me, being at a university. I think there's the kind of ideation function that this helps with, and there's also a kind of calibration function. So on a number of occasions, I and other experts, at things like a set of security conferences that I think Kevin attends, look at sci-fi and ideas around sci-fi and then really critique: how close are we, how realistic is this near future or far future? For people in the policy realm who don't have a lot of technical specificity, there's the difference between an NLP system that autocompletes their search history and something you can have a conversational dialogue with; they don't know what the distance between those two is. A great example is the self-driving car that we were told would arrive last year and that we're told will arrive next year, but that a lot of the experts will say, given the policy considerations and all this kind of stuff, is probably longer off. Helping people understand how far away we are, I think, is another critical function. You're able to create a plot device that you can drop in, and policy makers like to drop in an existing plot: oh, we can just grab the thing and drop it in here, and we'll make energy out of the sun. That was a crazy idea a while ago. Helping anchor those concepts to people and make them a reality, I think, is a critical use or application of this as well. Yeah, I hear that constraints can be really useful, like a card or a simple exercise that invites people to step outside of their normal pattern, not letting the perfect be the enemy of the good.
We do that a lot in our projects. And I also really like what you said, Molly, about looking behind, and that I think is also what you are getting at, Ashkan: really understanding the mechanics of the state of technology now is important. I would add also the notion of looking around, and this is part of the problem with Silicon Valley: the business pitch story is all about the upside, and you don't think about what else could happen, the unintended consequences. So finding ways to look for new perspectives on the work is really, really important. So what are the moral hazards here? What can go wrong? We heard about Star Wars before; what do we need to watch out for when we're thinking about how we do this kind of storytelling with a public purpose? So you touched on one, which is like the problem with reinforcement learning: if you're doing modeling of any kind of data-driven system, how do you shake it up and invoke a new idea? Otherwise you kind of converge to a local maximum, and you will just reinforce an idea that everyone already knows; you'll never break free of that. So I think that's one critical one. I think the other is helping people not be overconfident in their vision, not oversell it. It's kind of like the bias where you might have heard a lot of people say the same type of thing about AI, that it's going to be a killer robot, and therefore you think: everyone says it's going to be a killer robot, so it probably is. The other is that you are now the foremost expert and futurist who comes in to describe what the likely security threats in Africa are, and you're like, I've got this, you guys. Overselling, or being overconfident about your position.
I think those would be the two key moral hazards, because we are kind of just making stuff up, right? I don't know if we're censored here. We are just going on the fly and expressing our vision of the world, right? And so having some humility around that, I think, is critical. Which policy makers don't really do. I think for me as a writer, the cliches that you mentioned in the beginning are kind of a moral hazard, because it's very easy to slip into shorthand. It's particularly easy around secondary characters, where you just slip into describing them the way that function of character is always described in movies and books. I think that's one of the clearer examples of where it happens, but it can happen in a lot of other areas as well, and that's very, very dangerous, because that's how we end up with stereotypes. They're very easy to repeat and to pass on, the ones that we've learned. As I said, it's easy to see in characters, but the things that you're mentioning, the trope of the technology that never fails or the trope of the killer robot, all of these things are very easy to repeat. So what's really important for me as a writer is to try to make sure that I'm questioning anything that I write without thinking, to make sure that I'm trying to build things out of my own observations and experience and not out of things that I've read a million times, because not only is that boring and poor narrative, but it's also dangerous. You have to make sure that there are enough different kinds of people telling the stories that you have a variety of stories; otherwise that's where you end up with the cliches. So let's open this up for questions from the audience. And when you ask your question, could you say one piece of sci-fi that really influenced you a lot, one story, movie, film, whatever, that was critical in your framing and shaping of this? No pressure. Let's bring you a microphone, sorry. A non-answer.
I'm not a sci-fi fan. I love the topic today, and thank you again for inviting me this morning, and thank you for all your insightful research and sharing that with us. I'm returning the question, to re-pitch it: how many women watch sci-fi, who watches sci-fi, and is there an impact in that on how we're shaping AI policy? WALL-E is cute. Is the question about how many women watch sci-fi or how many women create sci-fi? I mean, I can speak for myself. I grew up on Star Wars and Star Trek, along with a lot of other things. I also grew up on Tolkien and The Black Stallion and The Wizard of Oz and Anne of Green Gables, and I knew that my brother would never read Anne of Green Gables, although much later I found out that he stole my Sweet Valley High books when I wasn't looking. He's admitted this on tape, so I'm giving up a big secret. I always did, and to me stories are stories. I know a lot of women who both write and consume sci-fi in different ways. I don't know the statistics, but I think that if you look at the amount of conversation that goes on, there are a lot of women who are very involved in this. If you look at the current award slate of the Hugos, for example, it is strongly female, and a lot of people are very upset about that. And I also know there's been some work done by Lisa Yaszek, who's at, I think, the University of Georgia. Georgia Tech, thank you. She recently wrote a book called The Future Is Female, where she looks at female science fiction writers of the middle of the 20th century, the 40s, 50s, and 60s, who existed and were extremely popular, who had both editors and readers of magazines asking for more of their work, and who have really disappeared from our popular mental image of the genre. So there have always been women who have been writing and reading and watching sci-fi, but we don't always pay attention to them. We don't always listen to them, and we don't always accept them as forces in the genre.
I can give you a ton of names to read, and maybe you will find that you are a fan of sci-fi, just not the kind of sci-fi you would have encountered before. But I will do that afterwards, because we're short on time. Great question. Other questions? There's one right here. Yeah. I think it's a sci-fi movie, but Logan's Run. That one scares me more the older I get. But at any rate, one element, and maybe I misunderstand the format, is that sci-fi is also a deeply creative medium. So to what extent can you dictate to a sci-fi writer, a sci-fi artist: you said AI was evil, you need to stop that? I'm just wondering where that comes into this discussion, so that it's not just propaganda for some business model. Thank you. I can tell you as a sci-fi writer that sci-fi writers get, let's say, strongly suggested to all the time, because I get requests from anthologies to write about specific topics or subjects all the time. Then of course it's my choice whether I write about it or not, and if the topic doesn't grab me and I write a terrible story about it, they're probably not going to take it. But I do get all these prompts constantly, and to get a story published you have to go through layers of agents and editors and publishers. So while it's creative, the people who are creating it are not the only people who decide what stories get out into the world. And I think that is magnified hugely, although it's not my area as much, in the realm of TV and movies, where, as Chris was saying earlier, the bigger the budget they have, amazingly, the less risk they want to take. I mean, we see why that makes sense, but for someone who has to do a lot of their creative work on spec, it's also kind of amusing. We see that there's a huge number of gatekeepers who think this is what people will pay money to see, and they're often wrong, and yet that doesn't always change the gatekeepers. We see that when movies flop, it often gets
blamed on the female star or the female writer or the female director, or sometimes the male star, but rarely on the producers, the people who are making those decisions about which movies get made. So yes and no. We need to push the gatekeepers, I think, and we need to push the people who are providing media to take more risks and to go out and find different stories. How you define sci-fi matters, and maybe just broadening the definition of sci-fi a little bit is helpful. Like, I was really pleased to hear somebody call WALL-E sci-fi, which is a kids' movie, right? That's interesting: if you think about that as evidence of how people will think about science in the future, that's a really interesting definition, and it's broader than Star Wars or Star Trek, and that kind of thing gets you a little bit wider a view. And I think sci-fi has become, interestingly, more mainstream, and you see it permeating other genres in a funny way. Like, the last season of Parks and Rec, the sitcom, was, for no particular reason, science-fictional; they moved like five years into the future. I think there was one more question in the back. Yeah, go ahead. The book I've been enjoying most recently is The Three-Body Problem and the subsequent ones in the series, and I think what's interesting about that is that it's an entirely different cultural perspective on a speculative future. My question is kind of related to what you were just talking about, especially given the global nature of what we imagine governance of AI to be, and given the high barrier to entry of sci-fi in general, let alone across cultural contexts. I kind of encourage more of that perspective sharing, whether it's across country cultures or even within the US, as you were saying, traveling around the country; I'm sure there are different perspectives there. It's not just gender representation or community representation, but these different perspectives in this frame that I think we're
saying is a helpful one to think about when we're thinking about the future of technology. A lot of people can't actually work on AI in any substantial way, or on its related technologies; they're not the crafters of algorithms. But people are storytellers in a lot of different kinds of ways, and so a way to begin to engage critically and creatively with AI and related technologies and technological paradigms is exactly in some of the ways that I think we've been talking about. I think that's a pretty good place to start. Please join me in thanking our final panel. They're going to take off. I have a couple of closing remarks, and I recognize that I'm the last thing standing between you and our reception. So the first thing I'm going to do is share something. We're figuring out this project as we go, this AI Policy Futures thing, and we're continuing to look for new directions to take it, new partners, and new ways to communicate. So as part of our gathering today, we came up with a bunch of ideas for original science fiction stories that we're going to be commissioning over the next year or so. But another thing we did is we conducted a bunch of interviews at an event we had at South by Southwest a few months ago, and we have the raw materials for a podcast. Now all we need is for somebody to give us some more money so we can make the podcast. But we did make a teaser for the podcast, which I'm going to play for you, to entice you all to come up with brilliant ways for us to bring this thing to life. So I'm hoping we can play this podcast teaser, maybe by my magical powers. Over the next 10 years, artificial intelligence, or AI, is going to radically change nearly every aspect of our lives, from jobs to medicine to education and national security. But really, what does that mean? It's not an alien that comes out of nowhere and is completely different and has grown on a separate evolutionary track. What is it? What is AI? We see it a lot in science fiction.
My favorite science fiction AI is Samantha from Her. Hello, I'm here. What do I call you? Do you have a name? Yes, Samantha. Data from Star Trek: The Next Generation. My positronic brain has several layers of shielding to protect me from power surges. It would be possible for you to remove my cranial unit and take it with you. You want me to take off your head? Yes, sir. Marvin the robot from The Hitchhiker's Guide to the Galaxy. Life? Don't talk to me about life. But what about in real life? For some people, real AI doesn't even exist yet. For others, it's in the stuff we use every day. My favorite real-world AI is a thermostat. I guess it is a goal-seeking machine that can adjust its behavior to achieve the ideal temperature. So how can we understand AI? Will the machines take all of our jobs, or will AI make our lives easier? Will AI save us, or will it be the end of humanity as we know it? Let's not confuse what is cool in sci-fi with what is good in the real world. We need to take a moment and try to envision what a good future looks like. People need to be thinking about how they interact with the algorithms in their lives right now. But what actually interests me would be if a device of this kind could actually describe its own entity, I mean its own experience, from its own point of view. I don't even ask that he be creative. I would just like him to be honest. This is Imagining Intelligence, a podcast from the sci-fi house and Arizona State University's Center for Science and the Imagination. Join us as we explore the future of AI and what it means for all of our tomorrows. So if you are interested in talking more about that, or getting involved in the project in any other way, please feel free to chat with me or Kevin. And I want to close, very briefly, with my provocation to you, since you've been promised a provocation: when we talk about AI, we get hung up on this word, intelligence. We don't really know what intelligence is.
We've never really known, and all of our anxieties about AI are bound up in the way that this opens up the deep existential question of what it is to be human. And so the other related word is that word imagination. Everything that we're talking about here is how we can use our imagination to build a better pathway, to chart a better course, around all of the ways that intelligent machines and learning machines are already changing the world, already deeply implicated in the fabric of our everyday lives. And so if we're going to do anything about AI, and develop a better approach to our conversations around AI and policy around AI, we have to start with that word imagination. We have to take it on as a question for ourselves: how do we imagine the future? A future where there's a new mirror, a new set of systems that reflect ourselves back to ourselves, that pose the question to us, that throw our anxieties about identity and belonging and personhood back at us in all sorts of different ways, because we can't help but see ourselves in all of our tools and systems. So with that, I will thank you once again for joining us and turn things over to Kevin. And I will thank you, Ed and Andrew, for the trailblazing work you all have done at the Center for Science and the Imagination to help catalyze and solidify a growing community of practice that is taking science fiction seriously as a tool for thinking about the future of technology and the future of policy. Applied sci-fi, you might call it, or practical sci-fi. Everything we've been doing, this event, this project, the sci-fi house at South by Southwest, has been all about trying to build a community around that idea. And I want to thank, first off, all of the panelists and speakers for being a part of that community, and I want to thank you, the audience, for being a part of that community and joining us today. So thank you, and please enjoy the reception. Thank you.