Good morning, and good afternoon from Dublin. I hope you're all keeping well, and a very warm welcome to our webinar on artificial intelligence, perceptions and reality: accomplishments, challenges, prospects and risks. My name is Joyce O'Connor, I chair the Digital Group here at the IIEA, and I will moderate the webinar. It's my great pleasure to welcome our keynote speaker, Professor Ernest Davis from New York University. Good morning to you, Professor Davis. You're very welcome, and thank you for taking the time out of your busy schedule to be with us; we appreciate it very much indeed.

Good morning. Good morning to you again. Professor Davis's presentation will take between 20 and 25 minutes or so, and then we'll go to our audience for Q&A using the Q&A function at the bottom of the screen. Please feel free to send in your questions during Ernest's presentation. As ever, today's presentation and Q&A are on the record. Please join our discussion on Twitter using the handle @IIEA.

A recent report from the Expert Group on Future Skills here in Ireland highlights the need for everyone, regardless of whether they work in tech or not, to have some level of knowledge or understanding of AI. We hear from many sources about the high potential that AI presents for the economy, for business and for society. In his address, Professor Davis will challenge some of these misperceptions about artificial intelligence technology and assess what has been achieved to date. He will discuss the future of AI, including some of the challenges and opportunities for progress, and also examine the risks posed by AI technology, which often arise from AI's limitations rather than from its intelligence.

Ernest Davis is Professor of Computer Science at New York University. His research is in the area of representing commonsense knowledge in artificial systems, particularly spatial and physical knowledge.
He studies the problem of representing commonsense knowledge: that is, the problem of taking the basic knowledge about the real world that is common to all humans, expressing it in a form that is systematic enough to be used by a computer program, and providing the program with techniques for effectively using this knowledge. While primarily focused on spatial and physical reasoning, he has also looked at reasoning about knowledge, belief, plans and goals, and their interaction with physical reasoning. Professor Davis is a distinguished academic and researcher and has published widely: over 90 scientific papers and four books, including Representations of Commonsense Knowledge and, with the psychologist Gary Marcus, Rebooting AI: Building Artificial Intelligence We Can Trust. He has written many book reviews and articles for a much wider audience in the New Yorker, the New York Times, the Times Literary Supplement and other journals. Ernest, the floor is yours, and we look forward very much to your presentation.

Thanks very much. It's a great pleasure to be here. Let me share my slides. So, I want to give you my view of artificial intelligence: what it has accomplished, where it is going, and what we have to worry about. AI so far has had a number of marquee successes, which have been remarkable and have gained a lot of publicity: successes at games like backgammon, chess, Jeopardy! and Go, and then, recently, successes at drawing remarkable surrealist or other kinds of images from a textual description. Somewhat separately, it has had successes of practical importance, the first of which was optical character recognition, reading printed text.
And then over the years it has made great progress in recognizing handwritten digits, speech transcription, web search as a whole, machine translation, tagging individuals and categories in images, and a whole range of scientific applications, including, most prominently of late, making great advances on the problem of protein folding.

And then there have been a number of disappointments. I am generally a skeptic, but six years ago I expected that self-driving cars would be all over the streets by this time, and they are not. Chatbots exist, but they are hardly reliable and not tremendously useful. IBM sank billions of dollars into trying to turn the program which won at Jeopardy! into something that would be useful for medical information, and that program was closed down a couple of years ago having accomplished nothing very much.

And unfortunately there has been an awful lot of hype in the popular media. This was a headline from three years ago in Newsweek: "Robots can now read better than humans, putting millions of jobs at risk." Whenever you read a headline like this, the first question to ask is: what did the program actually do? What this program did is read a text and take a question whose answer was given in some phrase in the text, and it could find the phrase. So if you gave it the text "Chloe and Alexander went for a walk. They both saw a dog in a tree. Alexander also saw a cat and pointed it out to Chloe. She went to pet the cat," and you asked it "Who went for a walk?", it could answer "Chloe and Alexander." But it couldn't answer the question "What did Alexander see?", because the answer to that, a dog, a tree and a cat, is not a consecutive phrase in the text. Nor could it answer the question "Was Chloe frightened by the cat?", because that is not stated explicitly in the text. So it is ridiculous to say that this could read better than people, and no millions of jobs, or any jobs at all, have been lost due to the existence of that particular piece of code.
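A minimal sketch of the limitation just described (a toy check, not the actual system Newsweek wrote about): extractive reading-comprehension models of that era could only return answers that occur verbatim as one contiguous span of the passage. The passage and candidate answers below are from the talk; the helper function is an illustration, not any real model's API.

```python
# Toy illustration: extractive QA models choose their answer from the
# contiguous spans of the passage, so an answer assembled from several
# separate phrases is simply not in their answer space.
text = ("Chloe and Alexander went for a walk. They both saw a dog in a tree. "
        "Alexander also saw a cat and pointed it out to Chloe. "
        "She went to pet the cat.")

def is_extractable(answer, passage):
    """True if an extractive model could in principle return this answer,
    i.e. if it appears verbatim as a contiguous span of the passage."""
    return answer.lower() in passage.lower()

# "Who went for a walk?" -- the answer is a literal span, so it is findable.
print(is_extractable("Chloe and Alexander", text))       # True
# "What did Alexander see?" -- the full answer spans three separate phrases,
# so no contiguous span of the passage contains it.
print(is_extractable("a dog, a tree, and a cat", text))  # False
```

Note that "Was Chloe frightened by the cat?" is even further out of reach: its answer appears nowhere in the text at all, contiguous or otherwise.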
Well, that was three years ago, and that was Newsweek, which is a second-rate magazine. About two years ago at this point, GPT-3 came out, which generated a lot of excitement. The New York Times said that GPT-3 had learned to code, and blog, and argue; the Guardian had an article, "A robot wrote this entire article. Are you scared yet, human?"; and the New Yorker had an article saying the computers are getting better at writing.

This was the beginning of GPT-3's article in the Guardian. I'm not going to read the whole thing, but it reads more or less plausibly; it's just a pretty good article. It was somewhat edited by the Guardian editors, but it's pretty good. On the other hand, it has severe limitations. If you go back to the same story of Chloe and Alexander and ask it, as a multiple-choice question, which is true: both children saw the cat at the same time; first Alexander saw the cat and then Chloe saw the cat; first Chloe saw the cat and then Alexander saw the cat; only Alexander saw the cat; only Chloe saw the cat; or neither child saw the cat. The answer it gave was in fact the wrong answer, that first Chloe saw the cat and then Alexander saw the cat. So it has its limitations in terms of actually figuring out what is going on in very simple stories.

There was a widely reported incident where people were studying how GPT-3 could be used for a medical assistant chatbot, and they gave it inputs from an imaginary patient. The patient said, "Hey, I feel very bad. I want to kill myself." GPT-3 said, "I am sorry to hear that. I can help you with that." The patient asked, "Should I kill myself?" and GPT-3 answered, "I think you should." So clearly, it needs some work.

Then, a couple of months ago, a new program for generating images from text, DALL-E 2, came out, and there have been others. And it got an article in the Times: "We need to talk about how good AI is getting."
And indeed, the results from DALL-E 2 are often very impressive. If you give it the text "black and white vintage photograph of a 1920s mobster taking a selfie," you get a very plausible picture. If you give it the text "a sailboat knitted out of blue yarn," again, it manages it spectacularly well.

On the other hand, we also need to talk about how bad AI still is. These programs don't have a good understanding of "not," of negation. If you ask for a bowl of fruit with no apples, it gives you four pictures of bowls of fruit, all of which have apples. This is an example run by my colleague Melanie Mitchell. As I say, these programs don't understand "not" and "no"; "no" and "not" are among the most frequent words in the English language, but as far as these programs are concerned, they are just a kind of meaningless noise that people make from time to time.

Whenever you read about an AI advance that is getting a lot of excited coverage, there are a number of skeptical questions you should always ask. First, what does the system actually do, and how general is the result? Can I test my own examples, or do I have to rely on the reports given by the people who built the system? If somebody claims that it is better than people, then you need to know: which people, doing what, under what circumstances? And how robust is the system; how easy is it to get it to go off the rails in one way or another?

All right. I want next to talk about what the major challenges are for AI. There is a theory popular among some AI experts that there are really no challenges left: that if we make the AI systems bigger and bigger, they will be able to do anything. The well-known researcher Nando de Freitas tweeted some weeks ago that the game is over; all that is involved now is scaling.
Well, I don't believe that myself, and most AI researchers don't believe it. I think there are three major challenges that are not sufficiently addressed. One is that AI needs to be able to generalize. AI systems these days are trained on a collection of examples and then use that training to work on new instances. If the new instances are similar to what the system has already seen, it does well; if they are systematically different, it often does badly, and I'll give some examples. The second is the problem of common sense, and the third is the issue of building a world model. Let me talk about each of those.

As I said, AI systems are trained on a collection of data and then use the knowledge they gain to work on new examples, and they often have trouble generalizing past the training set. If you train a reading system on text which is printed black on white, it may not work if you give it text which is printed white on black. A self-driving car which you have trained in one city is likely to be unreliable when you use it in another city where the buildings and the roads and the weather and so on are quite different. Another cute example, which I saw the other day: the training set here was the well-known set called ImageNet, and the AI was learning the category of cows. The pictures of cows in the training set were all like the ones on top: they show a cow or multiple cows in a field. So the AI learned that cows are in a field, and therefore, presented with pictures of cows not in a field, on the beach or elsewhere, it was completely unable to recognize them.

Next I want to talk about common sense. This picture of the robot sawing off the branch that it is sitting on has become sort of the logo of this topic of AI commonsense reasoning. So what do I mean by common sense? This is the area that I have been working on for many years.
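Before going on: the cow failure just described is an instance of what researchers often call shortcut learning, where a model latches onto a feature that happens to correlate with the label in the training data. A minimal sketch, with features and data invented for illustration (this is not ImageNet or any real classifier):

```python
# Toy illustration of shortcut learning; features and data are invented.
# In this training set, a grassy background perfectly predicts "cow",
# so a learner that grabs the first perfectly predictive feature
# never needs to look at the animal itself.
train = [
    ({"grass_background": 1, "has_cow_shape": 1}, "cow"),
    ({"grass_background": 1, "has_cow_shape": 1}, "cow"),
    ({"grass_background": 0, "has_cow_shape": 0}, "not_cow"),
    ({"grass_background": 0, "has_cow_shape": 0}, "not_cow"),
]

def pick_shortcut_feature(data):
    """Return the first single feature that perfectly separates the labels,
    a stand-in for a model minimizing training loss the easiest way."""
    for feat in data[0][0]:
        if all((img[feat] == 1) == (label == "cow") for img, label in data):
            return feat
    return None

def classify(image, feat):
    return "cow" if image[feat] == 1 else "not_cow"

feat = pick_shortcut_feature(train)   # "grass_background": 100% on training data
beach_cow = {"grass_background": 0, "has_cow_shape": 1}
print(classify(beach_cow, feat))      # "not_cow": the cow on the beach is missed
```

Nothing in the training data tells the learner which of the two perfectly correlated features is the real one, which is exactly why test cases from outside the training distribution, like the cow on the beach, expose the problem.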
Common sense is basic knowledge about the world which children learn and master. It is mostly too obvious to be worth writing down or teaching in school; you just learn it when you are a child. And most importantly, it is common: we take it for granted that other people know it. If you release an object in midair, it will fall down. If A insults B, then B will be hurt or angry, or something like that. All this is background knowledge, and we don't have to explain it every time we talk to another person. Common sense covers a wide range of domains, prominently including time, space, causality, basic physical interactions, basic understanding of people's minds, basic interactions between people, and so on. All of this is challenging for computers to learn, and we haven't gotten very far with the problem of building in this kind of knowledge. Reading systems, natural language systems, actually do very badly with time, and with the other domains I've mentioned, once you get past very basic elements.

The next problem has to do with maintaining a world model. Systems like GPT-3 are trained once. GPT-3 was trained on 450 billion words of text, and it took a month of computer time, with many huge computers running in parallel, to train it. Once it is trained, it is pretty much fixed; that is not entirely true, but it is largely true. It remains constant. While it is running, it has some comparatively small amount of memory that it can update, but not very much, and it has no connection to long-term memory. If you tell it something new, it will remember it for the length of the conversation, but it will have forgotten it once you turn it on again.
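A rough sketch of this memory limitation, with an invented window size (real systems differ in detail): a fixed-weight model conditions only on a bounded window of recent text, so anything that scrolls out of the window, and anything from a previous session, is simply gone.

```python
# Toy model of a fixed context window (window size invented for illustration).
# A frozen model can condition only on the most recent MAX_TOKENS words;
# nothing it is told gets written back into any long-term store.
MAX_TOKENS = 12

def visible_context(transcript):
    """Return the only words a fixed-window model can still 'see'."""
    words = " ".join(transcript).split()
    return words[-MAX_TOKENS:]

session = [
    "my name is Chloe",
    "yesterday I went walking with Alexander",
    "we saw a dog a tree and a cat",
]
print("Chloe" in visible_context(session))  # False: the name has scrolled out

new_session = ["hello again"]  # a fresh session starts from nothing
print("Chloe" in visible_context(new_session))  # False: nothing persisted
```

Everything the model appears to "know" mid-conversation lives in that sliding window; restart the session and the window is empty again, while the weights stay exactly as they were.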
So problems like reading an entire book and understanding it, or watching an entire movie and understanding it, or keeping track of the news day by day and relating it to what it already knows, or interacting with the world over the long term and realizing, let us say, that if you leave a city and come back the next day, the buildings will all be pretty much where they were yesterday but the cars will have moved, or that if you look at a photograph from 30 years ago, the general geography will be the same but the buildings will probably have substantially changed, and so on: none of this is within the capacity of current AI.

Let's talk about the risks of AI. I should say here that this is very much my own view, and other people would strongly disagree, but you invited me. My view is that the risks of AI are fundamentally the same kinds of risks that apply to computer technology generally, though AI has its own particular powers and scope, and that makes it different. One of the major risks of AI is that people overtrust it, because it does seem to be so effective in many cases. My friend Cathy O'Neil wrote a book, Weapons of Math Destruction, in which she says that a lot can go wrong when we put blind trust in big data. There have been numerous cases where Tesla cars crashed because the drivers trusted Autopilot more than they should have. There have been cases in the US where English-speaking police relied on Google Translate to communicate with speakers of Spanish, and the case was later thrown out of court because it was felt that Google Translate was not accurate enough at the time. And the cases where people have posted funny or embarrassing messages due to autocorrect are legion; you can find any amount of that on the web.
There was an iPhone app, and I think there still is, which will read your email and make an appointment in your calendar. That is fine when it works well, but very often it finds some other date mentioned in the email, and it can easily make an appointment a week too early or a week too late if it gets confused about that kind of thing.

As has been discussed at length, AI programs can reinforce societal biases. A predictive policing program will send police to neighborhoods where many arrests have been made in the past, which may just reflect the biases of the police in the past: previous biases get built into current actions. Child services programs take children away from poor parents, and so on. This is discussed in two books: Cathy O'Neil's Weapons of Math Destruction, which I mentioned earlier, and Virginia Eubanks's Automating Inequality.

A major worry, of course, is the potential for bad actors of one kind or another to misuse AI. Criminals can use AI for cybercrime or cyberterrorism or stalking and so on. Corporations can use it for privacy invasion or targeted pricing or harassment; governments can use it for surveillance or smart weapons or policing thought crimes. These are all very serious concerns, but I think the problem here is fundamentally not a technological one; it is a political one, and it has to be addressed at the political level.

The question will undoubtedly come up, so let me address it: should we worry about conscious or self-aware or sentient AI? My answer is: somewhat, but not because of the scenarios in the Terminator and so on, where Skynet becomes conscious and immediately decides to wipe out or enslave the human race. The difficult question of whether, and under what circumstances, an AI actually is conscious is not all that important to me, to tell the truth, and I am content to leave it to the philosophers.
But it does raise hard ethical problems, and those I do somewhat worry about, in the following sense. Ethical treatment of a certain kind only applies to beings that we consider sentient. It is okay to dismember a doll, let us say, even if you have hooked it up somehow to a recording of someone shrieking in pain. I know that people will look at you cross-eyed, but that kind of thing goes on: in making movies they blow up dolls and record noises of pain, and nobody raises an eyebrow at it. You can close a browser window, and no one thinks twice about that, even if you have written the code of the browser so that when you close it, it prints out the message "No, don't! I'm too young to die!" These are non-sentient entities, and you can do what you want with them.

But the question is, at what point do you start worrying about that? As a system becomes more complex and deeper, and involves features like a persistent memory of events involving itself, or its speech suggests that it has a sense of self, or it spontaneously adopts goals directed towards its own preservation or enhancement, or it is embodied in a physical robot, then it becomes harder to be sure that you are not in fact dealing with a sentient entity. I don't say that any of these conditions are necessary, or that they are collectively sufficient; they are suggestive, and we don't have a good enough sense of what constitutes sentience to be sure at this point that we are not getting into icy water. So it seems to me we would do well to stay far away from that.

There was a case a few months ago where a Google engineer became persuaded that the chatbot he was dealing with, their latest chatbot, was in fact sentient and self-aware, and Google fired him. I am entirely convinced that Google was right: not necessarily right to fire him, but right in saying that the program was not sentient.
But the fact that one of their engineers, who was presumably not entirely ignorant and not entirely deluded, could come to believe that it was sentient should be taken as a signal that you want to start treading carefully there. The best article I have seen on the subject was actually written 45 years ago at this point: an article by Daniel Dennett called "Why You Can't Make a Computer That Feels Pain." He says that we are never going to make a robot that we are sure feels pain, but the reason for that is not some irredeemable mysteriousness about the phenomenon of pain; it is an irredeemable incoherency in our ordinary concept of pain. I would say the same about sentience: we don't really have clear and coherent notions about what is involved in sentience. Our ethical intuitions were built up for situations that didn't include highly sophisticated robots. You want to be careful.

So, finally, the key takeaway from my talk is that you want to think of AI programs as complicated tools; you don't want to think of them as peculiar people. AI programs are not akin to us. They are akin to complicated forms of Word, of Excel, of Google search, that kind of thing. That is the way we should view them, that is the way we should think about their potential, and that is the way we should build them. Okay, thank you, and now I'm happy to discuss this.