Well, good afternoon ladies and gentlemen, thank you very much for joining us here today. Thank you and welcome to our audience who are viewing this issue briefing on our webcast platform at www.weforum.org. Very excited to be here. This is our first issue briefing in the history of the annual meeting in Davos. They're slightly different to the normal fare you have here. They're not press conferences, they're not announcements and such. It's simply a tool for taking some of the high-class intelligence that we are able to convene upstairs in the Congress Center and bring some of the cleverest people down here to answer questions on topics which we know are of particular interest. As I mentioned, it's the very first one, and it's on a subject which is a perennial favorite on our own Forum blog platform: artificial intelligence. I'm going to keep this very free and hopefully informal. I'm going to ask our two speakers, who I will introduce in a second, to make some brief remarks on the session they've been participating in this morning and also give us the benefit of their experience in the field of artificial intelligence. And then we'll leave it open for Q&A. So without further ado, I'm very, very pleased to be joined by Alison Gopnik, Professor of Psychology at the University of California in Berkeley. And Ken, thanks very much indeed for joining us. Ken is a professor, also from Berkeley, and his specialist area is robotics. Alison, perhaps we could start with you giving us a bit of a briefing on your particular entry point into this subject, and perhaps sharing some thoughts from the session this morning.

So I actually study how children can learn as much as they do. About 10 or 15 years ago, we started collaborating with computer scientists who were trying to design machines that could learn in the way that children do, partly to figure out about machines, but also to use that as a model for children.
And I think one of the points that Ken made in his talk is that one of the interesting discoveries of AI has been that many things we thought were going to be very hard have turned out to be pretty easy, and things we thought were going to be very easy have turned out to be very hard. So for example, it turns out to be much easier to simulate a grandmaster chess player than it is to simulate a two-year-old. And this has made us realize just how much even what a two-year-old learns involves processes that we don't really understand very well, and that are very powerful indeed. I think that was one of the themes that came up in the meeting: things like chess or theorem proving, the great ways that nerds prove their machismo, have turned out to be not actually so hard, whereas things like picking up a cup or recognizing a face have turned out to be hard. Or getting to an appointment on time. Okay, robots are much better at doing that because they aren't distracted. So I think that was one message. And then another message that Ken and others emphasized is the question about values. We know that if we have a goal, we can design a machine, using a lot of smarts ourselves, that can get to that goal. But when the decision becomes which goals are actually worth getting to, that's something that's much harder to get a machine to do.

May I follow up on that? Thank you, Alison. So we actually came to a fair amount of agreement. In particular, you've heard so much of the press coverage recently about Stephen Hawking and Elon Musk talking about the singularity. And one idea I want to propose is that it's time to actually move beyond the singularity. Instead, I propose the multiplicity. The multiplicity is the idea of many people, groups of people, working together with groups of machines to solve problems.
I think this is actually a much more constructive idea, because it's something we're already doing. In a sense, Google's search engine is a multiplicity. The way it works is that it's using all the human-provided linking structures. It's actually testing humans all the time: when it gives you a list of responses for a search term, it sees what you click on and uses that to update, so it's smarter the next time. So it's using humans and machines, and that symbiotic combination is something we don't understand enough about yet. One of the areas of interest at Berkeley and in other labs is how we start really developing a science for this kind of collective intelligence: both the collective intelligence of humans, and collective intelligence where humans and machines are interacting with each other. I don't think the term multiplicity has been put out there yet. We're hoping you, as the press, can circulate it. This is our first announcement of that word. We'll see if it catches on. But I think it's time to start talking about that instead of the singularity. The other idea we talked about this morning is that there's been a big advance in the field of robotics. We've always assumed that robots have to carry all their computing on board. It's just an assumption we never questioned. But we're now realizing that's not the case. Robots will almost always be in a place where they can get access to the cloud, and now we can do processing in the cloud, so that very complex statistical computations can be done there. The side benefit is that all that data is available in the cloud, so it can be shared across robots. The collective can grow and exchange ideas, information, and models, so collectively they get much better over time.
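The loop being described, where each robot's action is an experiment whose outcome is pooled in the cloud for every other robot's benefit, can be sketched in a few lines. This is a minimal toy, not any real cloud-robotics API; all class names and numbers are hypothetical:

```python
from collections import defaultdict

class CloudModelStore:
    """Toy shared store: robots report outcomes, all benefit from pooled data."""
    def __init__(self):
        # object name -> [successes, attempts], pooled across every robot
        self.trials = defaultdict(lambda: [0, 0])

    def report(self, obj, success):
        record = self.trials[obj]
        record[1] += 1
        if success:
            record[0] += 1

    def success_rate(self, obj):
        successes, attempts = self.trials[obj]
        return successes / attempts if attempts else 0.5  # prior guess before any data

class Robot:
    def __init__(self, name, cloud):
        self.name, self.cloud = name, cloud

    def attempt_grasp(self, obj, success):
        # Each action is an experiment; its outcome is shared via the cloud.
        self.cloud.report(obj, success)

cloud = CloudModelStore()
r1, r2 = Robot("r1", cloud), Robot("r2", cloud)
r1.attempt_grasp("mug", True)
r2.attempt_grasp("mug", False)
r2.attempt_grasp("mug", True)
print(cloud.success_rate("mug"))  # pooled estimate built from both robots' experiments
```

The point of the sketch is only the data flow: neither robot has seen all three trials itself, yet both can query an estimate built from the collective experience.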
So this idea of cloud robotics is another new development. So actually this is a press conference, and we have an announcement: multiplicity is a new word in the parlance of artificial intelligence.

Could you both perhaps share your personal highlights or key learnings from this morning's session? There were four people on the panel, I understand, other computer scientists as well, not just yourselves, so it's a collaborative effort advancing artificial intelligence. Anything our colleagues here should be considering writing about?

Well, there's one thing I think we completely agree on, which is that there are certain categories, I would say, that have to do with aesthetics and art, emotion, and design. One is humor. I don't think we'll ever see a robot telling a great joke in our lifetime. So there are a lot of limitations, essentially areas where I conceded to the opposition on those points. I don't think they're going to make better jokes than humans. Maybe I'll be wrong. And then the other thing that surprised me, at least, was a fascinating study of human nature. Everyone voted on which side they agreed with at the beginning and then at the end, and almost no one changed their mind, despite an hour of us trying to offer new ideas. Nobody changed their mind. So confirmation bias is incredibly alive and well.

I think that's true. But I think part of the reason for that was that, for a debate, there was in some ways more agreement than disagreement. There's pretty general agreement that computers have gotten to be very good at certain kinds of things: being able to learn from big data sets, for example. Those techniques for statistically pulling patterns out of big sets of data, that's been a real advance, and everyone could agree on that. Using things like Bayesian probability theory to make inferences, that's been a real advance.
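As a small illustration of the kind of Bayesian inference being referred to, here is a toy posterior update over two hypotheses about a coin. The hypotheses and numbers are made up purely for illustration:

```python
# Toy Bayes update: infer which of two hypotheses generated the observed data.
# Hypotheses: the coin is fair (P(heads)=0.5) or biased toward heads (P(heads)=0.8).
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.8}  # P(heads | hypothesis)

def update(beliefs, observation):
    """One Bayes step: posterior is proportional to prior times likelihood."""
    post = {}
    for hypothesis, prior in beliefs.items():
        p_obs = likelihood[hypothesis] if observation == "heads" else 1 - likelihood[hypothesis]
        post[hypothesis] = prior * p_obs
    total = sum(post.values())
    return {h: v / total for h, v in post.items()}

beliefs = priors
for obs in ["heads", "heads", "heads", "tails"]:
    beliefs = update(beliefs, obs)
print(beliefs)  # belief has shifted toward "biased" after seeing mostly heads
```

The same prior-times-likelihood step, scaled up to huge hypothesis spaces and data sets, is the engine behind much of the progress both speakers are describing.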
And everyone could agree that that was a real advance. The places where, as Stuart Russell said, if it's going to be solved it will take at least three or four breakthroughs down the line, are things like creativity: being able to actually think of a new idea, being able to change the idea you already have, integrating the kinds of things that we humans do with emotion into what AI does. That's something we don't understand very well. And figuring out how exactly we manage our values: what it means to have a moral system, what it means to have a value system, how it is that we can decide, as people at Davos do, for example, that we need a new value system, that the value system most people hold isn't working and we need to persuade people to shift their values. Those are all things that are way out on the horizon, very far removed from what current systems can do.

Although, I just came from another session where we were talking about manners and digital technology, this sort of obnoxious behavior of holding your phone while you're talking with someone. And we were saying that manners and behavior really can change. An example of this is smoking, right? It was not too long ago that it was very common to just have cigarettes at the table, et cetera. And now, at least in the US and I think many other countries in Europe, you really can't do that anymore. We've adapted in a relatively short time. So maybe we need a call for a change of manners and behavior around technologies, a school of etiquette that we should learn and then start to adopt before it's too late.

I think another issue that came up in the meeting that is worth pointing out is that in some ways there were two separate questions. One of them is, will we have the superintelligent computers any time soon?
And at least for things like being able to revise and change what we think, being able to be creative, being able to have moral values, I don't think any of those are on the horizon. But I think another thing that came out is that there can be lots of damage without having the superintelligent computers. Natural stupidity will beat out artificial intelligence any time for really screwing things up. We have plenty of natural stupidity, and the combination of natural stupidity and artificial intelligence can be a really dangerous combination. So we do have to think about how we regulate things like autonomous weapons, an example that Stuart gave, or things like a machine that can suddenly change the way the stock market works. We need to think much more about how to do that. Now, all of that is true even with machines that aren't anywhere nearly as intelligent as we are. It is still true that those machines can be dangerous, especially in the hands of occasionally incredibly intelligent but also amazingly stupid humans, especially grown-up humans; it might be better if they were just in the hands of two-year-olds. We should have two-year-olds at the World Economic Forum. Yeah. That would be a great session. Noted. Let's work with that.

Before I hand over to questions, what was the voting? Give us an idea of the voting at the beginning and at the end of the session.

Okay, so at the beginning... We won, by the way. Let me just point this out. They won by a slim margin. But it was about 50-50 on whether we thought machines would make better decisions than humans. And at the end it was about the same; there were a few points that had changed, and they had beaten us. It was interesting that it was 50-50. I wouldn't have predicted that. I wouldn't have either. Yeah. Very interesting.

Okay, let's have some questions. We'll have a microphone down here for the benefit of our audience. Mike, could you also give your name, please, and your outlet?
The gentleman there will take yours, and Michael there in the front row.

Hi, it's Jim Edwards from Business Insider. I just want to ask the really dumb question about Elon Musk, who keeps telling people that robots will come to kill us all as soon as they become smart enough. Is he right? Will the robots kill us all as soon as they become smart enough?

No. No, absolutely not. First of all, thank you for being from Business Insider, which ranked UC Berkeley as the number one school to study robotics. So thank you. Sorry, cheesy, isn't it? But I will say, no: this kind of fear is exactly the wrong thing. What we need to be doing now is being constructive, not talking about this singularity, because I don't think that's even close. The much better, more interesting thing is this multiplicity idea.

Michael De Allen from Portugal and Denmark. Could you please broaden out the perspective on these cloud robots you were talking about?

Well, very quickly, there are really five advantages to having robots connected to the cloud. The first one is access to data: you suddenly get access to maps and all kinds of models of the objects around you. It's enormous. The second is computation. When you want to plan a series of motions, with uncertainty as the central problem for robotics right now, it requires vast amounts of computing, and you can't carry that much computing on board, so you can do it in the cloud. And the third is the idea that the cloud provides the ability for robots to communicate with each other and share information. Every time a robot acts in the world, it's essentially doing an experiment. It reports back the outcome of that experiment, and that gets shared and integrated.
The fourth is the idea that the cloud facilitates humans sharing software, and this is accelerating research in robotics. There's an open-source platform called ROS, and something called Robotics as a Service, which is a new idea: a software architecture that lets you very quickly get access to robot algorithms. And the very last one is that no matter what system you use to program your robot, there will always be cases where things fail or it gets stuck. Then, because you have access to the cloud, the robot should identify those situations and can tap into a human call center. It sort of reverses what we have now: instead of us getting a robot on the phone, the robot will call and get a human on the phone, who will help diagnose the problem and resolve it. The robot will say, oh no, not one of those humans again. So those five channels are all brand new, and they're really accelerating the field.

Fascinating. Gentleman at the back there, please. And then after that, the gentleman there in the middle.

Hi. I'm Zontek. I just would like to add to that question from before: do you have an idea or a vision of what kinds of jobs are going to be replaced by robotics or artificial intelligence in the next five to ten years or even beyond?

Yes, I do have one answer to that, and I'm going to yield to Alison in a minute, because with Swiss precision they have scheduled me to be on a panel in seven minutes, so I have to dash over there, and I apologize. But I will say this: I think there are huge opportunities right now for teaching. There's a vast need for teachers, not just in classrooms but for teaching all kinds of things, and that is under-tapped right now. So could we find a way, economically, to allow people to be rewarded for teaching?
We feel, and I think most of us have experienced, that teaching is extraordinarily rewarding. So for someone out of work, even a part-time job teaching in some fashion, I think there's enormous potential there. If we can help the people who are put out of work, which will be inevitable in the short term, start to use technology to teach in new ways, I think that could be a really constructive outcome. So is it okay if I depart?

Please do. All right, I apologize. Thank you very much. You're in excellent hands, because Alison's terrific. All right, thank you. So get your hard questions ready now; we just have Alison here. Do you have any comments, Alison, on that one point? The jobs and, I guess, the most obvious economic impact?

Yeah, I mean, I think it's fairly clear that just as jobs that were once mechanically done by people in factories are now being done by robots, some jobs that were done using our minds and brains are going to be increasingly done by artificial intelligence. But I think part of what came out of the session is that there's plenty of scope in the things that people are much, much better at than computers. The optimistic thought is that those are the most human things anyway: being able to have creative new ideas, or to consider changes in values. If more of us were engaged in doing that, if we could find a business model for creative thought about progress in the future, then that might not be a bad thing to happen at all.

Okay, if I may, we'll take one more question before we close. This gentleman here in the middle.

Tetsuya Gorashi from NHK Japan, nice to meet you. I have like three questions, but I'll just take one. I've been researching the advancements in technology, and in the past five years there have been big advancements. What do you think are the societal drivers of those advancements?
There's a very interesting point Ken mentioned when talking about his multiplicity idea, which is that a lot of the big advances have come because the computers have really been using millions and millions of examples from these incredibly high-powered computers that have been chained together in a way they never were before to solve problems, namely all the human beings who are using the internet. We have these fantastically powerful computers, and they're all working together to put cute cats out on the internet. And then Google can actually use the output of all of those people finding the best cat picture to get an algorithm that lets you recognize cat pictures, which is something that every one-year-old can do with much less data. So some of the advances have come because there have been big improvements in our ability to do things like pick out statistical patterns. But a lot of the advances have come because there's now data available: not only big data in large quantities because of the internet, but also data that's been pre-digested by human beings. To take another example, machine translation depends on the fact that we have access to millions of human beings who are translating text all the time. You know, the terrible dystopian picture in The Matrix, where we're all lying there thinking that we're having fun but really we're just feeding the machines: that's actually just true. As we're sitting there putting up Instagrams of our cute cats, what we're also doing is giving data and information to machines in a way they didn't have before. So it's an interesting question: is a computer aided by a million human beings who are doing the work without knowing it a superintelligent computer, or not? It's not superintelligent in the way that a baby is, but it can do things that we really couldn't do before.
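The point about translation leaning on pre-digested human work can be made concrete with a toy model that does nothing but count how humans rendered each word in a small, entirely hypothetical parallel corpus. Real statistical translation systems are vastly more sophisticated, but the dependence on human-produced pairs is the same:

```python
from collections import Counter

# Hypothetical parallel corpus: (source word, human translation) pairs.
corpus = [
    ("chat", "cat"), ("chat", "cat"), ("chat", "chat room"),
    ("chien", "dog"), ("chien", "dog"),
]

# The "model" is nothing but counts of what human translators did.
counts = {}
for src, tgt in corpus:
    counts.setdefault(src, Counter())[tgt] += 1

def translate(word):
    # Pick the rendering humans chose most often; fall back to the word itself.
    if word in counts:
        return counts[word].most_common(1)[0][0]
    return word

print(translate("chat"))   # the majority human choice wins
print(translate("chien"))
```

Without the human-produced pairs, the model can do nothing at all, which is exactly the sense in which the people on the internet are, without knowing it, feeding the machines.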
So I think there's quite a lot of reason to believe that having the internet, having computers be able to access all of those brains and the output of all of those human brains and use that in their computing, has been the thing, or at least one thing, that's made the big difference.

Alison, a final word if I may. What will be the focus of your work in 2015?

Well, let me back up for a minute and say that what Alan Turing proved all those years ago was that if we can take any process and describe it in a systematic, step-by-step way, then we can program it onto a computer. So when we say computers can't do things, what it means is that we don't really understand yet how those things are done. The thing that we're trying to understand is how it is possible for people to be creative. That's something we just take for granted. But if you think about creativity from a computational perspective, what it really means is that there's this enormous, infinite scope of possibilities: possible ideas that we could have, possible skills we could have, possible ways we could organize ourselves into a society. Somehow you have to decide which one of those is going to have the best outcome without knowing it in advance, and somehow human beings, not just adults but especially three- and four-year-olds, can get a sense of, here's a crazy weird idea, but it's sort of in the ballpark of something that would work. We don't really know how that's possible, whether for creative, innovative adults like scientists, for two- and three-year-olds who are pretending, or for computers. So what I'm working on right now is trying to see whether we can give some kind of story about how things like play and imagination could actually be implemented in a computational way.

Well, thank you very much indeed. We're running out of time, and like all Swiss organizations, we must finish on time.