So, good evening. We'll go ahead and get started now, since we're hitting 6:30. Thank you all for coming this evening. This is the first in what will be five events, probably slated over the next year, in celebration of the bicentennial of Mary Shelley's Frankenstein. I'm going to give you a little bit of background, and then we'll turn our panel loose for you to enjoy their insights, their experience, and their acumen.

But briefly: tonight is inspired by Mary Shelley's Frankenstein. My guess is that a number of you may have seen a film, but the odds of all of you having read the novel are probably slightly smaller. So, roughly 200 years ago, during a cold and stormy summer, Mary Shelley was part of a competition to write ghost stories. She, her then lover and soon-to-be husband Percy Bysshe Shelley, George Gordon, Lord Byron, her stepsister Claire Clairmont, and, oh yes, Byron's doctor John Polidori all engaged in a competition to write ghost stories. The catch is, only Polidori and Mary finished their stories. Polidori's was a vampire roughly modeled on Lord Byron. Mary, on the other hand, had a hard time starting hers until she had a nightmare. I won't describe the nightmare, but the result was a five-page short story. When Byron read it, he dismissed it as just kind of fluff, in their terms. But Percy, on the other hand, found that there was something here, something in the imagery that really struck at him, and he encouraged her to turn it into her first novel. This said, she was 18 at the time.
She wrote it, and the novel came out, initially anonymously, on March 11th of 1818, and she didn't actually get credit on the cover until it was released in a second edition in 1823, at the time taking advantage of the great popularity of a stage version of the story. For any of you who are familiar with the films, by the way: it was the stage version that added the character of Fritz, who later becomes Igor, the humpbacked lab assistant.

But in any case, what we want to do tonight is not look at the creation of the monster, or, I'm sorry, I apologize, the creature, as a combination of human remains and parts that are stitched together and reanimated by lightning. It's instead the idea that Mary looked at. Well, I'm sorry, I've got notes; I'm going analog and old school for a conversation about artificial intelligence.

So we have the creature Mary cast. Victor Frankenstein, in the novel: there's not a great deal of description about how he does this. He essentially goes mad. He goes manic and is just driven to do a thing, because he has figured out the magic element. He knows how to create life; he knows how to animate life. So he runs night after night, staying in his lab, putting things together, running off down to charnel houses, a stray crypt, whatever it takes to get the parts he needs to create this creature, this construct of his.

But after the lightning comes, after the creature opens its eyes, Victor recoils in horror. He runs away, oddly enough only as far as his bedroom, where he eventually collapses into a fever. But he leaves his creation, at the time not vocal, not truly coherent, innocent, but slowly becoming aware. Victor runs away; the next morning the creature is left to somehow get out of the city and into the woods, but we'll leave that aside. It wanders about. It's innocent, and initially generous. It meets a family to which it starts to leave gifts.
We won't talk about stalkers at this point. But it's only when he introduces himself to that family that, again, they recoil in horror. He's ugly; it's a response to his appearance. And he's confused, hurt. This is compounded later, when he's again chased from a village. At every instance when he encounters humanity, it's repulsed; it drives him off. So the creature, understandably, seeks revenge. He happened to have Victor's notebook, his lab notes, his journal of how this happened, which gave him an idea of how to go find Victor. So what the creature does is not stalk Victor, but stalk his friends and family, gradually isolating Victor, leaving him as alone as the creature had been upon birth. And then there's a chase scene that ends in the Arctic.

But the important note is that this turning of the creature to kill its creator is something the science fiction writer Isaac Asimov coined as the "Frankenstein complex," in his story "Little Lost Robot"; that was the title, from 1947. This theme has been replicated, reproduced, and has spawned hundreds of variations over the past hundred years. But Asimov's complex, as he coined it: he in particular intended it as a warning, not just about automation, about robots, about artificial intelligence. It had a second warning in it, and that was the unthinking or even callous replacement of human labor with automation. Very trenchant in the 1950s and '60s, as automated factories were coming in.

But where we take this tonight is through popular culture. This has spiraled into fear of, frankly, almost all automation: artificial intelligence, mainframe computers, take your pick. Whether it be Skynet from the Terminator franchise, still going strong after almost 20 years; the Terminators themselves, cyborgs without a trace of humanity and controlled by programming, you know, programming and algorithms; or even the more human HAL 9000 of 2001, driven mad by conflicting orders. We can look more recently: any of you who are watchers of HBO and are familiar with
the series Westworld, and the perversion, in some respects, or the lack of ethics involved in creating a humanoid android purely for pleasure, for servicing humans. Part of the warning that Asimov gives us is to consider the ethical constraints that need to be involved in creation, not just of life, but of that which may cause us to redefine life. How do we plan ahead? How many of you watched an automated Uber drive by sometime in the last few days? Where does some of this innovation go? It's not all about accidentally slipping up and having commercially driven science create a genetically modified dinosaur; those are pretty blatant threats. But how we change our habits; how automated bots on the web will alter us, alter our voting, alter our perception of each other. Many of us have probably gotten into arguments with an automated entity at one point or another. Programmed by humans, for now. But with something called the Singularity, which Vernor Vinge posited back in the 1990s, at some point there's the possibility of artificial intelligence growing so sophisticated that it bootstraps itself and designs that which we don't even comprehend anymore. If we're lucky, this leaves us behind. If we're unlucky, it steps on us like ants.

But on that note, I'd like to introduce your panel tonight. Going from my left and moving over: we have David Danks, who is the L.L. Thurstone Professor of Philosophy and Psychology and head of the Department of Philosophy in the Dietrich College of Humanities and Social Sciences. We have Barry Luokkala, who is teaching professor and director of the undergraduate physics laboratories in the Department of Physics here, in the Mellon College of Science. Molly Wright Steenson joins us from the School of Design; she's associate professor and director of the Doctor of Design program. And not least and last, Jeff Bigham.
He joins us from, let's see, the Human-Computer Interaction Institute in the School of Computer Science. He too is an associate professor. Thank you very much for joining us this evening, and thank you to the rest of you. I'd also like at this moment to thank the Dean of the Libraries, Keith Webster, and the Illuminae Association for letting us put together this event for you this evening.

So let's start. The first question I have for you, let's try something of a lowball. Barry: in terms of physics, in terms of materials engineering, where do we stand in terms of creating an artificial intelligence that could launch nuclear armageddon, like Skynet, or perhaps take over all of the satellites and unleash a modern technology blackout? Lowball.

Okay, I'm going to deflect the question by saying that is not a materials science question, nor is it a physics question. That's really a machine learning, artificial intelligence kind of thing. So it's levels above physics and materials science. If you can get a system that is smart enough to be aware of its surroundings and learn all the intricacies of nuclear weapons and international relations, that's not physics. That's way above physics. And are we there yet? I hope not.

I'd accept that. I was aiming this more toward, well, with the current information retrieval capacity and processors, maybe even just information storage, you know, are we looking at technology advancing enough that we can develop... I mean, this requires algorithms. This requires the work of all of you coming together for something to be created that could do this, not even going into the political relations. Odds are all it needs to do is read Facebook to launch a missile or two. But what would you say in terms of human cognition, then, or approaching human cognition?
David, are we gaining the beginnings of an understanding of what it would take to simulate human intelligence, much less create one?

I think that's a tough question. And it's tough because what's happened in AI and machine learning over the last 30 to 40 years actually kind of mirrors what psychology has done over the last 120, which is that there's been a turn away from trying to explain the whole intelligence, or construct what in the '70s and '80s was called artificial general intelligence, AGI. In the '90s and 2000s, AI took certain steps forward precisely by no longer trying to do the general thing, by stepping back and saying: let's take one very clear, very focused task and optimize on that. So you get AlphaGo, created by Google DeepMind, which is unbelievable at playing Go and, you know, can beat the best human player in the world. And at the end of the match, the human player gets up, goes home to his wife, cooks dinner, and does all these other things, whereas AlphaGo gets turned off.

And so this divide-and-conquer approach that I think we've taken in AI over the last 20 years mirrors what's happened over the last hundred years in psychology and cognitive science, which is divide and conquer. So let's understand vision; let's just understand how a human eye, or human eye plus visual cortex, recognizes a chair, which is already an unbelievably difficult task, but we can start to figure that out. And speaking as both somebody who does AI and somebody who does cognitive science, the hope in both disciplines is that at some point we're going to be able to just kind of glue everything back together, right, that we'll get general intelligence by gluing together all of these local understandings. And I think one of the things we've learned in the last hundred years in psychology is that that's unbelievably hard.

So I think sometimes it's easy to get misled by success at things like Go, or even, occasionally,
driving for about five minutes on a sunny day on a nice clean Pittsburgh street, and to lose sight of all of the other kinds of tasks that we can do as humans. So I think the worry is much less about having AI that reaches human-level intelligence. I would suggest it's much more about us mistakenly thinking that AIs that are not nearly human-level actually are, and delegating authority and power to them that perhaps we shouldn't, given that they aren't yet at our level.

Is there anything the rest of you would like to add to that?

Could I? One of the things that brings to mind is the notion of modeling. AI is always about the models that we have. Frankenstein is about someone modeling an intelligence, and even needing to make it eight feet tall, because that was the level of fidelity, the level of fine grain, which is to say quite coarse, that was available at that time to the creator, right? And over time our notions of these models change. The funding for certain models works for a while and then it doesn't, micro-worlds being one example: a certain kind of modeling works until about 1974, and then that's no longer funded, right? But I think that's an important question here: what are these models that we build, and how do we see them work and then eventually expire, only to be replaced by something else?

Yeah, I guess the only thing I would add is that it's kind of tempting and interesting to think of these AIs as separate, all-encompassing general intelligences, and that's someday in the future, and it's really fascinating to imagine what that might be like. I think what we're dealing with right now, and what is really worrisome to me, is this idea that we're already dealing and working with these AI systems, already, as individuals, as groups. You know, you mentioned social media, right?
I mean, there's all kinds of algorithms, kind of AI, not always AI, that are influencing our behavior there, but they're also influencing our behavior in all kinds of arguably even more important things. Right? So, to the extent that an insurance company uses an algorithm to help inform the analyst that makes decisions; to the extent that a hiring manager does the same thing. You know, so Equifax can't even keep our data, but somehow they have these secret models that are informing who has credit, who can get credit or doesn't. And as people affected by the decisions these models are making, we don't necessarily know to what extent the model influenced a decision versus the human. And probably, at the organization level, it's not always clear even to the people making the decisions how much influence machine learning and statistics had in the decision process versus the humans, or how much you would want each of those parties to have.

So let me extrapolate from this. Let's limit it to the idea of not one overwhelming artificial intelligence, not something like an Ultron, or maybe something closer to a Skynet, where, okay, it was modeled on a mainframe computer in the '80s and it had a very distinct mission: control nuclear weapons, prevent the Soviets from bombing, react. So it had a more constrained set of responses and actions available. If we look at something along these lines, these more limited systems or expert systems, then how much of this... let me restate that. If we look at it as models, how do we begin to approach looking at the interrelation of models? So, you mentioned the eye recognizing a chair. It goes through a process that's first geometric, and then analogy, I'm assuming, in terms of what can be used or what can be done with these surfaces, and runs down through probabilities to figure out, okay:
This most likely is used for that. I don't even know how to ask this. Design is where I'm thinking with this first.

I'll take a shot, if that's okay. I mean, I think there's a couple of different issues that come up. The first is a kind of complexity issue, which is: given that we have different modules for different functions, what happens when you start to put them together? How do you have some ability to predict, when humans are notoriously bad at predicting these kinds of interactions? How can we do so in a way that actually gives us some confidence about the reliability of the system as a whole?

But I would suggest there's also a different sense in which, and I love the word "models" that Molly used, we sometimes need to think about what kind of model we want, in the sense of, sorry to use a piece of philosophical jargon, descriptive versus normative. That is to say: do we want models of how the world is, or models of how the world should be? So take the recent case of biased software for predicting recidivism rates, especially in Wisconsin, that is, the likelihood that somebody would be re-arrested. The race of the defendant, of the person coming up for parole, turned out to be a very significant factor in what the system said was the likelihood of recidivism, of being re-arrested. And in fact, it turns out that, descriptively, the algorithm was probably right: if you are African-American in the United States, you are more likely to be re-arrested than if you are white. But we might also plausibly think that's due to structural racism, that's due to other kinds of factors, and that the world we want to live in is one in which we judge people's likelihood of reoffending on the basis of non-racial factors. Right? But that's the distinction to draw. And so we sometimes talk about things like algorithmic bias or model bias,
And part of the problem is that bias is a complicated thing. It depends on a standard, and that's a human activity. Right? So this comes back to Jeff's point: it's not just the machines by themselves, it's also the human plus machine, right? So it's not just modules within the AI; it's the module of the human, and the module of the AI, and the model of what data is collected.

Absolutely, which makes for an interesting question: who makes those decisions? So in this particular case, in Milwaukee, whoever was making those decisions, it was going to have a very profound implication, right? But I would argue that it's the role of designers, or human-centered designers, also to take part in the decisions of what data is collected, and how, and how that data is parsed. I've been trying to at least teach our master's students about some of the considerations on these fronts, because I think those are the worlds our students are going to be graduating into and working in, given how many students we have go into tech, or design and tech. So how do we do that, right? How can we figure out what those very human factors are and build them into the data collection and algorithm design process, when many people don't actually have access to building, designing, and curating algorithms? How we get the right people on board is one question that I have.

I mean, to play off of that, one thing that I have seen play out over a number of instances of data bias is that the initial reaction from technology folks often is: oh, let's just remove the bias from the data. Because they hear examples like this and they're like, oh, well, of course, that's a bad idea; we should remove race as one of the features the model is learning from. And maybe you can make some progress there. It's probably a good first step, right?
But it's actually really difficult to truly remove such bias. The machine learning algorithms that we have, the reason they are so effective, is that they can pick up patterns that humans don't see. And maybe the pattern they learn ends up recreating a variable that you took out. So maybe they're learning something that's a proxy for, say, race. Maybe they end up learning some bias that we didn't even know the data had, or maybe some weird bias that the machine just kind of invented, right? And so, to get back to Molly's point here, people should be involved, and I increasingly believe that the only way people can truly be involved is if the models are open and, to some degree, the data is open. Now, there are problems with opening data, and maybe it's more like an auditing situation, where people can look at the data. But without doing that, it seems almost impossible to make sure that these AI systems that are affecting us in so many different ways have any chance of reflecting our collective values. Right? Because they're being developed mostly by corporations, who have different incentives. I'm not even saying bad incentives, just different incentives than maybe we would want as a society. And, I'm sorry, am I allowed to... I don't want to usurp your position.
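[Editorial aside: the proxy effect described here, where a model recreates a dropped sensitive variable from a correlated feature, can be sketched in a few lines. Everything below is synthetic; the variable names (`group`, `zip_code`) and the 90% correlation are invented purely for illustration, not drawn from any real system.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute we intend to exclude from the model (0/1).
group = rng.integers(0, 2, n)

# A seemingly innocent feature that is 90% aligned with `group`,
# so it acts as a proxy for it.
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical outcomes were skewed against group 1.
label = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# "Fair" model: predict from `zip_code` alone; the per-zip base rate
# stands in here for a trained one-feature classifier.
rate_by_zip = {z: label[zip_code == z].mean() for z in (0, 1)}
pred = np.array([rate_by_zip[z] for z in zip_code])

# The model never saw `group`, yet its scores still differ sharply by group.
gap = pred[group == 1].mean() - pred[group == 0].mean()
print(f"score gap between groups: {gap:.2f}")
```

This is why audits typically compare model outputs against the sensitive attribute after the fact, rather than assuming that deleting the column was enough.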
This AI is, you know, getting in the way. I mean, I guess, well, going with that, here actually I wanted to ask a question of Barry, if I'm allowed to. Because I think one of the challenges I find when I talk to people about these issues is precisely this widespread belief that algorithms are objective, data is objective, and so, you know, maybe we should have looked a little bit more, or we needed to use just the right algorithm, but it's objective, you can't argue with numbers, this sort of attitude. And I'm wondering, you've done a lot of work using science fiction to teach: can we use science fiction, whether it's the science fiction of Mary Shelley or the science fiction of today, to help people better understand that any kind of algorithm, computational technology, artificial technology, is value-laden? That it's not objective, even if the creator thought it was?

Yeah, that's a really good point. And I think one of the key pieces of the Frankenstein story in your synopsis that you missed, Rick, was the interaction between the monster, the creature,
let's call it the creature, and Frankenstein, very shortly after it was brought to life: the human element. Frankenstein didn't just flee the building and leave the creature. He fled, as you said, to his bedroom. But the creature found him. The creature came, parted the curtains of his bed, and smiled at him. The creature was longing for a relationship with his creator, but this particular creator couldn't handle that level of responsibility, of interacting with something that was intended to be beautiful but ended up badly. That was the point at which Frankenstein leaves the building. But the creature wanted to interact with his creator. And the human element, the bias in the data, we've got to consider things of that sort. The value-ladenness there is something that has to be recognized.

To make it one concrete example: recently I was working with colleagues at Microsoft Research, and we were trying to make a system that would label images on social media so that people who are blind would know what was in those images, right? And there's all this great computer vision technology out there that can kind of do this. But they put it out there, and they found that it often had errors, right?
It's not perfect yet, but they found that the errors were kind of weird. So one example they found was a picture of Hillary Clinton at a rally last year, and the description that the algorithm produced for it was "a man doing a trick on a skateboard." Right? It's kind of weird. But what was even more worrisome is that they then brought in participants and had them look at these images, or read the descriptions, in context. And the participants found all kinds of reasons why that description would make sense, why on a Hillary Clinton webpage there would be an image of a young man on a skateboard. The explanations were like, well, you know, she's trying to present herself as kind of young and hip, and so there's a young man on a skateboard doing a trick, or something. Anyway, it turns out that the reason that kind of error happened, in that case and often, was that the creators of this system were really into sports, in particular skateboards, and so a lot of the data ended up being skateboards. Right? And so this isn't race or sex or the other things we think of as common sources of terrible societal bias, right? It's somebody who likes sports, and thereby influencing the data, and screwing it up, and making people think an image is of something it's not. It was crazy.

Well, that reminds me of the word2vec corpus, which is derived from news, right? You would think that if it's derived from a bunch of Google News, well, news is objective, and it should be a good corpus for text analysis. However, the word pairings ended up being something like "man is to doctor as woman is to nurse." So, "Paris is to France as Tokyo is to Japan," fine, but there were some really strong gender biases in the data. And there is a group.
I think it's Microsoft Research New England; one of Nancy Baym's students ended up working on this at a couple of institutions in Boston. They set up a crowdsourcing platform to undo, or to basically test, some of these biased pairings, and if enough people on Mechanical Turk found a pairing to be biased, they would correct it and feed it back into the corpus. So they had a way to look at this. Another way this has played out recently: a particular data set was made out of restaurant reviews, and it started taking anything that was a Mexican restaurant and giving it a negative rating, because the corpus it was trained on looked at sentiment toward Mexico coming out of the news, "build the wall" and so on, and therefore Mexican restaurants were viewed as bad. So it's ridiculous, and it's almost funny, but only almost.

Right. So, following that, but not with Mexico, with the different biases, let's try this in another way. We have a bias in terms of how we're programming, how we're designing artificial intelligence, any form of programming; the tasks it's being given, the models, are again human-defined. So we have aesthetic bias already built in, before we even start going into ethnic, political, or gender bias, which is also political. What I'm wondering about is, what do we have in terms of oversight? Science fiction again and again returns to this: in biotechnology, in technology, even, for that matter, in xenobiology, looking at first contact with whatever other intelligences are out there. The humanities, the arts, drama are exploring these questions, often in very large, broad, overstated terms; go back to, you know, Tyrannosaurus rex and such. But professionally, how are we addressing this ethical need? Some aren't aware of it; some work is already being done in several areas. Is the work being coordinated in any way? Let's start with that. There is work. How do you become aware of it in your fields? How do you contribute to it?
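[Editorial aside: the analogy arithmetic behind pairings like "man is to doctor as woman is to nurse" is nearest-neighbor search on `b - a + c` in an embedding space. The tiny 2-dimensional vectors below are hand-made stand-ins, invented for illustration, not real word2vec weights; they simply show how a gender-skewed geometry yields the biased completion.]

```python
import numpy as np

# Tiny hand-made embedding space (axis 0 ~ gender, axis 1 ~ "medical job").
# Real embeddings are learned from text; these toy values just mimic the
# skew that learned vectors from news corpora were found to have.
emb = {
    "man":      np.array([ 1.0,  0.0]),
    "woman":    np.array([-1.0,  0.0]),
    "doctor":   np.array([ 0.8,  1.0]),  # skewed toward the "man" direction
    "nurse":    np.array([-0.8,  1.0]),  # skewed toward the "woman" direction
    "engineer": np.array([ 0.9, -1.0]),
    "teacher":  np.array([-0.6, -1.0]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by cosine-nearest neighbor to b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    cos = lambda u, v: (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("man", "doctor", "woman"))  # the skewed space completes: nurse
```

Debiasing efforts like the crowdsourced corrections mentioned in the discussion operate on learned vectors of exactly this kind, just at hundreds of dimensions and millions of words.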
Okay, I guess somebody's got to go, I guess. So, no, you know, it's a hard problem, and it's a hard problem in part for a reason that Jeff mentioned earlier, which is that so much of this is done behind closed doors. All right? So if you want to know what is happening with, say, Uber and the autonomous vehicles, be prepared to sign a massive number of nondisclosure agreements, so you can never talk about what you learned. At the same time, there's a challenge: oftentimes, when I talk with the people who are developing the technology, the response is to say, look, I'm just building the technology. Somebody else is responsible for telling me what would be ethical or not for the system to do. Somebody else is responsible for saying we went too far with this. My job is just to build the cool technology. Which you can push back against: all technology already has values in it. Even with the Uber cars, a decision was made by a designer: should they drive as safely as possible, or should they follow the law at every opportunity? Because those two goals come into conflict; you can't always do both. Right? So they've already introduced values into it. But getting them to see that, that is often done through working with companies directly, and working with the educational system, trying to teach the next generation of designers, the next generation of technologists, to be aware of the human impacts and the human dimension of the technology they're building. It means going out to policymakers and trying to get them to understand. There's a very good chance that the US Congress is going to pass a bill that basically prohibits states from regulating autonomous vehicles, that basically says: we're going to set the rules at the federal level, and the states are not allowed to override us, and certainly municipalities can't. And you can guess what kind of rules are probably going to be put in place at the federal level.
They're very permissive, and it would put us all into a certain kind of, well, we're all already in an experiment that we didn't sign up for, here in Pittsburgh, but it would do it on a much larger scale. And so, you know, there's pushback there. There are people trying to lobby Congress; there are people trying to inform policymakers. And I think it's difficult, because so much of it happens behind closed doors, but you look for every opportunity, you take every opportunity, and say, well, that's part of what we should be doing as academics, right? Part of our job is to help inform and educate many different areas of society.

So, I'm a runner, and speaking of the Uber example: for the longest time I've been trying to, like, get up the nerve to just get hit by one of these Uber cars, because I thought, like, that's the way I could retire early. But then I was talking to one of my friends who works at Uber, and he said, well, you know, that would never work, because there's all these sensors. And so that was pretty stupid. But I think that's an example of us getting distracted by problems we can't understand from the outside. A more real example: for the longest time, and even today, I feel like people are obsessed with how the self-driving cars are going to solve the trolley problem, or variants of the trolley problem. You can think of this in various ways, but it's basically: do you kill the pedestrian, or do you kill the person inside the car? And it's pretty much irrelevant to the real problems and the real dangers of these cars. It came from philosophy, and I'm so sorry, profession.

You know, first Barry told me that machine learning and computer science are, like, way above physics, which I loved, and now you're telling me you apologize for philosophy. This is, like, the best panel.

Were you adding something, Molly?
So, I've been collecting cliches, and this is making me... I also collect sci-fi cliches of AI, and at the beginning of a few talks I've given lately, I run through headlines that say: AI is the new UI. AI is the new electricity. Data is the new oil. You know, it's one thing after another, and everything from, you know, the robot sidekick Number 5 in the movie Short Circuit, which tells you how old I am, I'm from the '70s and the '80s, to HAL, of course, and everything else. But I begin to wonder what it might be, and maybe I could ask Barry: what might it be to develop other visions of what AI looks like? Could we please get beyond the fembot? You know, like in movies like Ex Machina, which was a beautiful and jarring film that I have a lot of problems with. Are there ways we can start coming up with different imaginaries of this stuff, or even different cliches? Can we invent them, so we can begin as a society to think differently about what these possibilities are, what the restrictions might be, what the problems might be?

If I understand what you're saying, I think there already are such things, and a couple of good examples are Bicentennial Man, where the lead character was played by Robin Williams, and Commander Data from Star Trek: The Next Generation. These are examples of AI entities that long to be human. It's sort of like the Pinocchio story. So it's a different take: not on the dangers of AI going badly wrong, but on the AI wanting to be like its creators.

"The Measure of a Man."

"The Measure of a Man," yeah, yeah, the famous Next Generation episode, sure, sure, where all of Data's hardware specifications are given, and we can compare them to what the human brain is really like. And are we there yet? The answer is, Data wasn't even there yet in some regards, but we are there now. Which, I think, goes back to what you were asking me originally, and it kind of took me by surprise.
Yeah, the hardware is there, the processor speed is there, and the storage capacity is kind of irrelevant, because you no longer need to create a machine whose onboard storage capacity matches the human brain. All you need is a Watson that's wirelessly connected to the internet, and you've got all of that information without having to store it locally. So I think we're there. What we have to worry about is whether a machine like that can ever do the thing you said originally: launch a nuclear weapon without our authorization, or something.

So you lead me to the next question, actually, and it starts with both of you. Part of the question would be: how are we limiting ourselves by trying to reimagine ourselves in technology, the fembot or the male bot? I was going in with that, but instead you just triggered an idea about technologically augmenting society. This goes beyond the Uber. How are we limiting ourselves? How are we coming to depend on the internet, not as a form of intelligence, but as a mediated form of information and data? If we create a Data, or any android or synthetic life form, that is itself built into the cloud network, that will work on Earth, but where are its limits? For any of us who wander into a Faraday cage, well, we're fine, because it just shuts off our iPhone. Okay, maybe we're less fine; it depends on who you are. But any artificial intelligence that's tied into that pervasive network will lose its brain. So is this part of the planning? Can we consciously build in a plan for this, an ethical consciousness for this, so that we aren't giving away the farm, but we're also not becoming too dependent?
I kind of want to look back before I look forward, and this speaks more to the first part of what you were asking. In something like 1849 or 1850, Dionysius Lardner wrote about the annihilation of time and space that railroads and the telegraph were bringing, you know, the two original piggybacked technologies of communication and physical transport. And I think we keep talking about that annihilation of time and space and what it means for how we process and get intelligence. He actually talks about bringing intelligence to the barbaric countryside. Plus, what a great name, Dionysius Lardner. Can we just pause on that for a second? So this was a concern then, and we even repeat those terms now. We talk about the notion of human-computer symbiosis, but it was J.C.R. Licklider who coined that term in 1960. We use the Turing test, which is from 1950. And you could look at Simon and Newell saying that their General Problem Solver in 1958 was going to have completely modeled the human brain by the early 1960s. I almost wonder sometimes if we come up with these models based on what we have at hand at the time, and it always pushes out there just a little bit further. I guess the question is: are we all going to die because of a computer-unleashed attack, or the singularity, or
superintelligence, a la Nick Bostrom? That's a different kind of nihilistic question. But I wonder if we've got it now: deep learning is a new model, and then we're going to have a new model, and then a new model, and it's going to be always just in front of us, like we're always already never quite there. Or I'm totally wrong and we're in deep trouble.

So one of the things that we find in psychology is a phenomenon that many of us are used to, perhaps, which is: the more you know, the more you know you don't know. As you become more expert in something, you actually come to realize how little you know about it. And it seems like that's often the case with these questions, especially when we think about the nature of intelligence. As we're developing better AI systems, we're actually learning more about what counts as an intelligent system, what you need to be able to do, and how much is being done by the human developer of the system, such that we didn't notice it was being smuggled into the system. And I think that's often the case: as we keep advancing the frontier, we come to realize just how much further away the goal really is.

I think that's absolutely right. We talk about when the AI will launch the nuclear weapons, but to some degree, to the extent that AI is influencing and changing how we interact with all kinds of different sociotechnical systems, like social media and so on, AI is already having a huge impact on our politics, and our politics could lead to nuclear war. This is happening, and it's happening in these little incremental pieces. You say it seems like we're never there; well, you're never there until you get there. And so it's hard to predict sometimes, right?
So I'm just wondering about that. Well, I think one issue is that technology has, I would argue, accelerated a trend that dates back to at least Gutenberg, probably earlier, of moving things from the head out into the world. So yes, we all joke about how you go into a Faraday cage and it's like you've had a brain lesion, because suddenly you don't remember anything. On the other hand, that's what people said about diaries and notebooks, right? What would happen if you lost this notebook? And so I think one thing is this gradual shift, and it has been gradual, it's been by little pieces, but we've been able to move certain aspects of our minds out into the world, and we haven't always thought about the consequences of that. What are the skills we've lost as humans, cognitive skills? I especially look at this with my, sorry for those of you for whom this might be your age bracket, but I look at my 14-year-old daughter, who has a very different set of cognitive skills than I had at age 14, which was back in the days of Short Circuit, or actually a little before that. And so it is really striking to see the impacts that are already happening, and in an accelerated time frame. Railroads and the telegraph took a long time to build out; Google has taken 10 years, 15. And so you're seeing changes happen within generations, not just across them. I think that's a really fascinating challenge that we have that we didn't have before: do we have the time to acclimate, socially and individually, to the changes that are happening?
So recently I read about Facebook's response to some people saying that maybe they should take some more responsibility for the various things that their platform is enabling. And their response, which I will try to be fair to, was essentially that they are trying to be neutral. They realize that sometimes the platform can amplify, and maybe accelerate, and kind of build on these sorts of comments, but they're trying to be neutral. I think that sometimes we don't realize that amplifying or accelerating is a type of action that has influence, and to the extent that we are choosing to amplify or accelerate, it's not as simple as saying that these were always human characteristics and now they're just amplified or accelerated. No, the amplified, accelerated version is different, in a lot of different ways, maybe some good ways, but it's different. And I don't know if that's always clear.

So let me bring it back to science fiction, since this may be our last question before we open it up for Q&A with the crowd. Most science fiction writers will say that they're not futurists. They're not trying to foretell the future; they're instead talking about contemporary problems, typically social problems. To me this is an issue of engineering, but it's also design, and for many of you as faculty, it's how you present problems to your students, or actually teach them how to recognize problems and how to solve them. So with science fiction talking about today, turning this back to this idea of how we forecast what we're doing, to present a model for you: are we looking at a change in cities where we have dedicated lanes for Ubers, to cut down on the possibility of accidents? Are we then looking at the possibility of dedicated lanes for the
descendant of the Roomba that you send out to go get your groceries, because you're not going to pay for the Uber, or whatever? I mean, are we looking at a change in urban habitat, just to keep the focus small for now, that is not just today, but is looking ahead at how technology is going to be evolving in the near term? Can you think of any books, any science fiction, that is doing this right now? That's looking at that near-cast without it being a cyberpunk dystopia, for instance?

That's a little hard to say. Writers of science fiction imagine what the future might be like, but are they consciously predicting what the future will be? Probably not. A lot of stuff that was foreseen in science fiction now exists, but I don't think that was the sci-fi writers deliberately, consciously saying this is what's going to happen. It is just as likely that it was scientists and technologists who were inspired by that thing, who asked the question, wouldn't it be cool if I could actually do that, and then they do it. So it's more an inspiration of things that could be. I'm not sure if I answered your question.

You do bring it back to the idea that what we're looking at is science fiction, the humanities, become an inspiration. It talks about how we live, or what it is to be human and to live as humans. So then how are the rest of you applying this when it comes to artificial intelligence, or even just the design of human
environments that are going to require automation and various autonomous tools?

Well, I think one thing is using it to try and figure out the kind of future we want. Sometimes it's easy to think technology just kind of happens and then we have to react to it. Trying to think about the extent to which we can proactively influence technology, whether socially, or as particular people building technology, or as consumers who help shape the decisions about what people are selling in terms of technology, means trying to think about what is the future we want. There was, I don't know if any of you saw it, I think it was about nine days ago, a week ago Sunday, in the New York Times, an op-ed from somebody complaining about this horrific future because she let her two-year-old play with Alexa. And it was just horrible, because she introduced her two-year-old to Alexa and showed her two-year-old how to ask Alexa questions, and then within three days the two-year-old is asking Alexa questions like, should I wear the pink dress or the striped dress, and oh no, I've lost control. I mean, this was the whole tenor of the op-ed piece, and I sat there reading it going, no, you teach your child how to use it. Right? That's what parenting is. Parenting is, you don't just say, oh, what could I possibly have done differently? Well, what you could have done differently is taught your child how to responsibly use a technology. Sorry, that was me getting on a soapbox, because it was so infuriating to read. It was positioned as, look, we're starting on the slippery slope to a technological dystopia. And instead it seemed to me,
this is a case where, if you'd read any science fiction, you'd look and go, oh, we need to be educated. We need to teach the next generation how to use these technologies in order to proactively prevent this dystopia. Because we are actors too; we have agency, which I think we sometimes forget, but which some of the best science fiction reveals: all the little steps along the way where we could have done something differently. We could have stopped the dystopia, if only. And I would hope that we could use that as inspiration to realize that maybe right now there are things we should be doing to stop those little steps, so that someday our descendants, hopefully many, many generations in the future, won't be looking back saying, boy, if only.

A good illustration of that might be the recent movie The Circle. Emma Watson is the lead character, and it's all about a technological corporation that is driving for total transparency, total revealing of every detail of your life to everybody, and a focus on the community rather than the individual. And Emma Watson's character dares to be an individual. She wants to do something by herself, even though there's incredible peer pressure: why would you want to do that by yourself, when you have all these colleagues who like to do that same thing? Why don't you want to do it with them? Humans need to be alone occasionally, and not totally participating in community constantly, totally engaged through social media and everything. So she dares to be an individual and refuses to be absorbed into this connected collective. That's perhaps an example of a step that can be taken to prevent assimilation into the technological future that some people might prefer.

Unless you have something more to add,
I'll go ahead and open it to our audience for questions at this point. If you would, I can bring the mic to you.

So, computers have already been trying to start nuclear war since 1979. They keep giving us false alarms and saying that hundreds of missiles are currently coming here, but we have always had a person who then has to give confirmation, and the person generally has a funny feeling in their gut or something, and they double-check, and then they find out: oh, I see, those weren't actual missiles. That was a simulation program that was playing on the main machine.

Yes, yes, computers are trying to start nuclear war, but it's all about what you tell the computers to do and how much power you give them.

Going back to science fiction: Person of Interest raised a question about an artificial general intelligence where, instead of adapting a model, you focus on educating it toward autonomy. Has that ever gained any traction in actual real-world academia?

You mean the idea that if you really want to have an intelligent system, what you do is, in some sense, not build very much in, but give it the ability to learn, and then give it access to enormous amounts of data?

Exactly. Instead of modifying the data that comes in, teach the system how to view data, how to define a problem space, how to act in various ways.

That's been happening since the '50s or '60s. There's a little book called How to Solve It, written in 1945 by George Polya, where a definition of heuristics was put in place. A heuristic is a rule of thumb, and they're provisional. You probably have a bunch of heuristics for making a sandwich, or for getting your car parked and getting to work, a set of things. And sometimes you throw away the heuristics. They're provisional; you throw them away when you've got better ones, right?
So we've been doing that with computers and problem-solving for a really long time. We have the ability to do it better and faster now. There are gigantic data sets, and there's even the possibility for algorithms to teach algorithms how to make algorithms, right? So we see that speed up and take place. But I point out How to Solve It because it's a wonderful short little book about solving math problems that actually inspired that very notion.

And there is a technique in machine learning, reinforcement learning, where the idea is that you basically teach an algorithm how to solve a problem by it kind of flailing around aimlessly, and you giving it a positive reward when it does something good and a negative reward when it does something bad. I think the challenge for really doing this, I mean, the idea is that you could start off with a little machine, like a little baby, and you could teach it how to be human. But basically we don't yet have that machine, and there's a lot that would have to go into it, right? Our brains, we do learn a lot from when we're a baby to when we're adults, at least some of us, but that brain had been constructed over millions of years via evolution for it to even be possible to do that. And that's where I think we've not yet figured it out. We've not yet figured out how to construct the machinery that would allow the machine to learn across such diverse types of tasks. You can do it for a simple thing, and we do; that's reinforcement learning, and that's kind of going back to these really narrow problems. It works. But if you really wanted to do it across both Go and making dinner? Super hard, for lots of reasons.
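The "flailing around and getting rewards" idea the panelist describes can be sketched in a few lines. This is a minimal, hypothetical example, not anything the panelists built: an epsilon-greedy learner over four made-up actions, where the environment (invented here purely for illustration) secretly pays +1 for action 2 and -1 for everything else. The learner explores at random some of the time and otherwise picks whichever action currently looks best, nudging its value estimates toward the rewards it receives.

```python
import random

def reward(action):
    # Hypothetical environment: action 2 is secretly the "good" one.
    return 1.0 if action == 2 else -1.0

def train(n_actions=4, steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * n_actions              # value estimate per action
    for _ in range(steps):
        if rng.random() < epsilon:     # explore: flail around aimlessly
            a = rng.randrange(n_actions)
        else:                          # exploit: pick the best-looking action
            a = max(range(n_actions), key=lambda i: q[i])
        r = reward(a)
        q[a] += alpha * (r - q[a])     # move the estimate toward the reward
    return q

if __name__ == "__main__":
    q = train()
    print("learned values:", q)
    print("preferred action:", q.index(max(q)))
```

After a couple of thousand steps the estimate for action 2 climbs toward +1 while the others sink, so the learner settles on action 2. This also illustrates the panel's point about narrowness: nothing here transfers to any other task; the "knowledge" is just four numbers tied to one toy environment.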
Yeah, HAL stood for heuristic, algorithm, and logic. So right now we've got the algorithm; we don't have a machine that can decide which one of the three to use.

That certainly is the case. I think also there's a challenge of knowing not just which to use, but when it's okay to use one rather than another, and what you're even trying to do in a situation. So how do you decide what matters right now? And this goes back to the comment about computers and a human making the decision: it is in part understanding the context, and understanding that you might have multiple goals and you can't boil it down to just one in a moment. And so you as a human are trying to figure out how you can best achieve all the different ends you have, which change from moment to moment and context to context. Right now we aren't good at that. I mean, that's one of the huge problems that we face in AI and machine learning: how to have systems that can, and I'll use a loaded word here, intelligently learn and change their own values. How do they figure out what matters?
Well, you know, we know how we do it with a certain kind of intelligence, namely kids, and it takes a really long time to teach them values, to teach them ethical standards. The developmental psychologist Alison Gopnik was giving a talk at an AI conference, and there were 2,000 machine learning and robotics people in the audience, and she said, you know, I don't understand, all of you keep complaining about how hard it is to build artificial intelligence, and I think intelligence is easy. You have fun, wait 18 years, and there you go. And I think that's another avenue that is starting to be better appreciated now: the importance of looking back at what we know about psychology and neuroscience to help inform the creation of sensible, adaptable AI, and hopefully, thereby, ethical AI. We hope.

Let's assume for the moment that superintelligence is possible, 50 years from now, 100 years from now. Let's just ignore the technological questions. Given all you know about humans, what kind of life should humans have?

I find myself living in the house of the future, and it was built in 1890, and it's in Lawrenceville. And I find that I live in the city of the future, maybe more so than anyone else, but it was founded in 1787. So I'm sometimes wondering if humans keep doing humans, you know; we'll be the people that we are and have long been, and that's where we go. I do wonder if some of the future will look not quite so different, while other elements will be unrecognizable. I find myself thinking in that way to kind of slow down the "what are we going to do when it happens?" But I've had this thought, and I think it was something that you'd said, where if you think of the rise of fake news, or what's gone on with Russia interfering in the US election by doing ad buys on Facebook and Google, and using tweets and bots and influencing people,
I sort of wonder if the superintelligence's choice is going to be to do social hacking on humans, and maybe not technological hacking on other things. The dark scenario makes me begin to think that that might be possible.

I would love to hear a perspective on this, actually, because I'm worried about this as well. My biggest worry is what happens to who has the value when, and if, AI takes over, right? How does capital get distributed? I'm always really surprised, you know, when I watch Star Trek, that somehow Picard lives on this beautiful vineyard in France, but there are all these people on Earth. How did his family get to keep this when there's no money?

Go ahead and take one last question, and then we have a reception next door afterwards.

Hi. I like to call this the Blade Runner question. So Blade Runner 2049 just came out, and my friend came back and told me that he liked it, but he didn't really like how it went through the same cliche of, oh, android tries to become human, and tries to feel emotion and what it's like to actually be alive. In your opinions, what is a way that sci-fi can actually stray away from that cliche, so that feeling alive isn't the only goal of an android? What are other things that an android could feel or want to do?
Homicidal rage? Because that's another common trope of what androids do.

I think part of the challenge is that we don't know how to think about androids except as not-quite-humans. This word hasn't been used, but metaphor and analogy have been coming up in the background of a lot of this, you know, data is the new oil, these kinds of things, and I think part of the problem is that we don't know how to think about other life forms, or other kinds of intelligences, beyond human adult, human child, pet. So I could think of an android as being like my dog, in some sense. We struggle to come up with analogies or metaphors, and I think some of the very best kinds of science fiction, whether movies or books, are those that really understand that the android of the future is unlikely to be any of the metaphors we've had, except inasmuch as we create them, and, as Barry said, oftentimes the creation is inspired by what's already out there. That makes it hard, but it also leads to some of the very best work: the stories where you realize the individual is not quite what you thought. I actually think that's one of the virtues of the original Blade Runner: some of the replicants aren't just trying to be human. They really seem to be something different.

Can I add something to that? Nicholas Negroponte, in 1970, wrote a book called The Architecture Machine, and he founded the Architecture Machine Group, which later became the MIT Media Lab. And it's a wonderful, strange little book.
It's a theory of interfaces for artificial intelligence, basically. And he wrote in that book that it is so obvious that our interfaces, that is, our bodies, are intimately related to learning and to how we learn, that one point of departure in artificial intelligence is to concentrate specifically on the interfaces. Then he goes on a little bit, and then he says, let me see if I can find the rest of it at the end of the page, he says that there's this big initial question that remains unanswered: does the machine have to possess a body like my own, and be able to experience behaviors like my own, in order to share in what we call intelligent behavior? While it may seem absurd, I believe the answer is yes. I've found myself returning to that quote probably about every three days right now, and in looking at this panel, and in looking at Frankenstein, there seems to be something there, too, because maybe Mary Shelley is viewing a terrible intelligence as something that has a body, that grows reason and logic and emotion as a result. And maybe we do actually need models that have that much fidelity. Maybe we do need replicants that fall in love in order for them to begin to understand us.

I think we'll go ahead and cut it here. It's a very fruitful line of inquiry, especially the question of what is the end result of what we build. But we're inviting you to join us for a reception for the next hour, next door in the Danforth Lounge. This is just the first of what will be five events over the next year. Next month, you may have seen some posters, we have a student film competition, and we'll be returning to a thread along these lines with a graduate student 3MT-inspired roundtable and competition in February. So stay tuned to the library channels and we'll let you know more about it. But for now, if you'll join me in thanking our panelists this evening. Thank you very much.