We're also curating questions from our social media audience, and I'd like to say a very warm welcome to our audience watching us live on weforum.org. My name is Oliver Khan. I work for the Forum. I'm proud and pleased to be joined by Rodney Brooks, the founder, chairman and CTO of Rethink Robotics, a robotics manufacturer, and, of course, Stuart Russell, professor at the University of California, Berkeley, USA, both experts in their own right. Stuart, Elon Musk has been in the headlines recently with his warnings about artificial intelligence. What do you make of what he has been saying? First of all, Elon is a very brilliant guy, and people take what he says very seriously. He had recently read a book by Nick Bostrom called Superintelligence, which talks about the long-term question of AI, which is: what happens if we succeed in creating general-purpose superhuman intelligence, meaning systems that can make decisions that are much more accurate than those made by humans, that take into account more information, that look further ahead into the future? And so, very roughly, you can think of it as a little bit like playing chess with a machine, where we all understand that machines can defeat us very easily at chess because they can look further ahead into the future and they're more accurate. And now just extend from the chess board to the entire world.
You don't really want to be playing chess against a superintelligent machine with the world at stake. So this is the question that Elon is talking about. But I think the way it's been portrayed is, number one, it's described as if he's talking about the present. He's not talking about the present; he's talking about the distant future. It's very hard to say how far away that future is, because to reach that level of capability in machines requires quite a few conceptual breakthroughs, and it is very hard to predict when those will take place. The other thing that people are perhaps misunderstanding is that when he says this is potentially a risk to humanity, it's a little bit like saying, as H.G. Wells did in 1912, that the creation of atomic weapons would be a risk to humanity, and that was certainly right. But it wouldn't have made sense in 1912 to say, oh, then we had better stop doing physics. And that's the lesson that people are taking from what Elon is talking about: that perhaps we should stop doing AI because at some point in the distant future, success in a certain kind of AI, in general-purpose intelligence, might pose a threat. But the way I interpret what Elon is saying is that he is posing a challenge: if the AI community is working full steam ahead trying to create general-purpose intelligence, then it's the job of the AI community to understand how to make sure that the systems we build remain completely under our control. And at the moment I would say the AI community has not put enough thought into the question of what if we succeed. If I can also respond: Stuart mentioned chess playing as an example, and I think that illustrates an important distinction that many people outside the field of AI don't make. Inside the field of AI we make it, but it's so natural to us that we don't necessarily feel we have to explain it. And that's the difference between performance and competence.
A chess-playing program has very high performance: in the 90s, IBM's Deep Blue beat the world chess champion, and today you can get any number of programs that run on a laptop and are better chess players than any human being in history. So they have very high performance, but they don't have the competence that would come with a person playing at that level. A person who is a chess master can teach someone else how to play chess better. Those programs can't teach how to play chess better. A person who's a chess master knows that there's a lot of psychology in playing chess. Those chess programs don't know that. Those chess programs aren't aware that people exist. Those chess programs aren't aware they're playing a game. They can't play tic-tac-toe. They can't generalize. I think what has spurred people on is deep learning, a technique that has been very, very successful, more successful than I think most AI researchers would have predicted five years ago, at a couple of tasks. One is speech understanding, the low-level decomposition of speech signals, and so our speech understanding systems have gotten much better. Secondly, the very popular demonstration is image labelling. There was a story in the New York Times not so long ago showing how these programs could label images, and the example they showed was some people playing Frisbee: the system says, this is a group of people playing Frisbee. But if a person can label that image, they can answer a lot of other questions about that image that the programs can't. They can answer how many people are in the image, which happens to be three. The programs can't answer that. Where's the Frisbee? Can't answer that. What is a Frisbee? Can't answer that. Who was interacting with the Frisbee just recently, or will be in the future? Can't answer that. Who is looking at who? Can't answer that. These are things that a person who could label that image would understand, the bigger story around the Frisbee, that you don't get with deep learning.
It's just able to give that label. And I think that makes people like Elon Musk mistakenly assume that we have superhuman-level intelligence coming almost immediately, and I think that's probably a long, long, long, long, long way off. And Rodney, I wanted to broach that risk subject early on, but I want to come to you now: as a manufacturer of robots that generally do nice things, helpful things, you've said recently in the media that this year is going to be a tipping point, a knee in the curve, in the industry's growth, I believe. Tell us a bit more about what's happening on the ground, maybe from a business perspective. In my particular case, my company is building robots to go into manufacturing. The traditional robots we've had before have been dangerous: you have to put walls around them, cages around them, and they are very difficult to program. So what we've done is try to build a robot that it's safe to be this close to, and that is easy to get to do a task. And they are going into manufacturing in the US. We've had them there for about three years, and we're now putting them into Europe and into China. People often say to me, why would you want robots in China for manufacturing? But as many of you probably know, manufacturers in China are having a lot of trouble getting enough labour. Recruitment and retention are the two biggest problems for manufacturing in China. China has an incredible, incredible advantage over the rest of the world because it has this amazing supply chain that's been built up over the last 30 years. So manufacturing is not going to go to Africa or somewhere else where there may be more population.
It's going to stay in China. But, you know, wages have been going up 15% per year in China, so it's not as if wages have been held down in manufacturing; there just aren't enough people who want those jobs, and so there's a real demand for automation and robotics in order to maintain China's lead in electronics and other manufacturing. Okay, let's get a sense of who has any questions at this stage. Okay, so let's go to the lady here in the front row, please. Could you give us your name and your affiliation, please, and also mention who you would like to answer your question? Please don't make it me. Thank you. Thank you, Oliver. Yan Qing from China Business News. In a recent movie, a robot is portrayed as if it has feelings. Is it really a machine with this kind of feeling? That's not clear yet. And is it important? In many cases, people have the same reaction to the robot, the same behaviour, as they would to a human. So whether we treat robots as if they're conscious will depend to a large extent, I think, on how the robots look and how they behave, and the question of whether they are actually conscious may remain a mystery. In some ways, it remains a mystery for us. You don't know for sure that I'm conscious. You just make that assumption, because I look like you and I behave like you, and you know that you're conscious, and so you guess that I am. That's the state of the art in consciousness science right now: I know I'm conscious, and I guess you are. That's about as much as we know. Why does the question matter? Because if machines can have consciousness one day, they will control human beings, and not the other way around. That's the core of the question, I think. I think this is an important thing to get straight. The ability of a machine to control the world or to control the human race has nothing to do with consciousness whatsoever.
The ability of a machine to control the world or to beat me at chess has to do with its ability to make good decisions, to make decisions that achieve its own objectives. That doesn't require consciousness. Chess programs are not conscious, but they make much better decisions than I do. There's absolutely no reason to think that we require consciousness for machines to make good decisions in general, in terms of their interactions with humans and their ability to control the physical world. The only real question that consciousness raises is the question of whether we should accord rights to robots. If a large part of humanity comes to believe that machines are conscious, there will be a movement calling for machines to have rights. At the moment machines don't have any rights, any more than cars have rights. That's the one question where it makes a difference, and we may have philosophers and lawyers arguing about it for centuries without reaching any conclusion. I'll come back to consciousness in a minute, but I want to answer your first question. You talked about perception, action and cognition. I'm going to talk about each of those in turn. Research on perception is going exceedingly well right now, and part of that is driven by the video game industry. The video game industry has made 3D cameras very low cost, and with 3D cameras being low cost, researchers at any university in the world can now work on 3D vision. When you go to a computer vision conference or a robotics conference, there are many, many, many papers about trying to understand cluttered scenes, the sort of scenes that happen in real life where there's lots of stuff there. Now, they're working differently than a person. They're labelling; they're not understanding the function of the objects they label in the same way that a person does. But nevertheless, in some sense it doesn't matter. It's going to lead to a lot better robots.
On the action side, we of course have robots that can move well on wheels. We don't have them doing walking well yet, and the thing that's closest to my heart is dexterous manipulation. We've been working on that for over 40 years in robotics and AI, and the robot hands we can build today are really not much better than they were 40 years ago. We have not made very good progress with that. These things that we each have two of are amazing, amazing devices. In order to make progress on them, you need to make progress on four things at once. You need to make progress on materials, because our hands do a lot of what they do because of the material properties of our skin. You need to make progress on sensing within the skin, which is hard for us to figure out how to do, and video games haven't helped us there. You need to have complex mechanisms, because we have so many degrees of freedom. And you need new algorithms. You need all four moving together at once, and that's been very hard to get research teams funded to work on. So in dexterous manipulation, and in walking, we're not doing very well. In cognition, we have systems which can plan better, and Moore's Law has been very, very good to planners: faster computers alone have meant algorithms have gotten a lot better. My own experience is that it's very hard to get people in industry to want to adopt programs that make decisions through planning by themselves. They want the machine to do the same thing every time. They don't want it to be better sometimes and worse other times. So getting that out there is a challenge and requires a different way of thinking. With our smartphones, we're all used to the software sort of automatically being updated and getting better, but in industry that's not the way people are thinking at the moment. They don't like to lose that element of control. Getting to consciousness: will we ever have conscious machines? I think I'm a machine. I am a machine. I don't think there's anything else.
It's just biomolecules, biomolecules working on physical principles. So I think we have an existence proof that machines can be conscious. I think we have no idea what consciousness means, except our own perceptions of it, as Stuart says, and I think it's going to be a long time before we do. And the last point is, you went from consciousness to control. I created a bunch of intelligent machines, and those intelligent machines sort of got a little annoyed with me, and they thought they knew better than me, but they didn't try to kill me. My four children still love me. So I don't know why everyone thinks that just because you have a smart machine, it's going to hate people. My children don't hate me, and they're machines. That's a common idea, that the machine will suddenly wake up and decide that it doesn't like people. That's not really the issue. The issue is: how do you put a purpose into a machine, an objective, such that when the machine, which is by assumption more intelligent and more capable than we are, chooses what to do to achieve that objective, we can still be sure that the actions it takes are ones that we are happy with? King Midas, a long time ago, a famous figure in mythology, asked for a special power: that everything he touched should turn to gold. And what happened, of course, was that he got what he wanted: his water turned to gold, his family turned to gold, his food turned to gold, and he died of starvation and thirst and misery. He got exactly what he asked for, and it wasn't what he wanted. So there's a famous paper by Norbert Wiener, a very brilliant mathematician of the 20th century, written in 1960, in which he said: if you put a purpose into a machine, you had better be absolutely sure that the purpose is really the one that you desire.
It's very difficult to be certain of that, because when the machine carries out this purpose, it's going to come up with plans that never occurred to you as a human being as ways of achieving that purpose. So this, I think, is the core of the problem that Elon Musk is talking about, that Nick Bostrom is talking about, that other people are talking about: machines do things in ways that you don't expect, you still want to be sure that you're happy with the results, and how do we make that happen? That's a really difficult question. Fortunately, at the moment it's not immediately urgent, because the machines are not that smart. Chess programs just play chess moves; they're not in the taking-over-the-world business. But it's hard to predict when those breakthroughs will occur, and I always like to give an example from the history of nuclear physics. Ernest Rutherford was the leading nuclear physicist in the world, just as Rod is one of the leading roboticists in the world, and in 1933, on the 11th of September, he gave a famous speech in which he said that, essentially, we will never find ways of extracting energy from atoms. The next morning, Leo Szilard, another physicist, read about this speech and invented the nuclear chain reaction, and a few months later he patented the nuclear reactor and described how a nuclear weapon would work. So here was a prediction that something was a long, long, long way off in the future, Rutherford said it would never happen, and in fact it happened in less than 24 hours. So it's a little bit difficult to make these predictions of when those breakthroughs will occur. I think in AI we're somewhat further away, more than one breakthrough, maybe four, five, six, it's hard to say. But I do think that we need a lot more work on understanding how to put the right purpose into a machine so that we like the results.
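Stuart's King Midas example can be reduced to a tiny sketch in Python. Everything here is invented for illustration, not anything from the panel: the stated objective rewards only "things turned to gold", and a chooser that maximizes it literally picks the plan the user never intended.

```python
# Toy sketch of a misspecified objective (hypothetical example):
# the purpose put into the machine is "maximize the number of things
# turned to gold", with no term for what the user actually cares about.

def gold_created(plan):
    """The objective exactly as stated: count of items turned to gold."""
    return len(plan)

# Candidate plans the machine might come up with.
plans = [
    ["statue"],                             # what Midas had in mind
    ["statue", "food", "water", "family"],  # what the stated objective rewards most
]

# A purely objective-maximizing chooser, with no model of the user's intent.
best = max(plans, key=gold_created)
print(best)  # picks the plan that also turns food, water and family to gold
```

The point of the sketch is that the chooser is not malfunctioning; it is doing exactly what the objective says, which is why the remedy Stuart alludes to is getting the purpose right, not making the optimizer less capable.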
I'll just respond to Stuart there. I think he and I disagree somewhat here. I interpret the way Stuart talks as: we have the current world, and then we have this machine whose purpose we don't quite know very well. But in reality, technology doesn't happen that way. Before we have a machine which does something really bad to us, we'd have a machine that was quite annoying, and before that we'd have machines which were slightly annoying. So I think we will have a process of evaluating what we want; it's not going to suddenly appear all at once. In a sense, nuclear bombs did appear at once, but with $3 billion of 1940s investment and thousands of people working on it, in a war situation where it was kept secret. We don't have any machines with any purpose at the moment, any desires or goals; we don't have any, not even in labs. They have objectives; that's the point. What we mean by purpose is simply that the machine has some objective which it is supposed to optimize, that's all it means. It's not just coming out of the sky, suddenly, this super smart machine; it's going to come gradually, as we've seen with self-driving cars. Self-driving cars didn't suddenly get on the road; it's been a long process since 1988, as we were talking about earlier this morning, and still they're not out there, and still we're pretty sure they're not quite good enough yet to let loose on the roads. We put a lot of effort into getting the right panel here, because we wanted them to have a bit of a frisson and bang against each other. We sound like we have the same accent even though we come from opposite sides of the world. I just want to put in a shameless plug for our issue briefing at 6.30 this evening, which will actually have some neuroscientists, so you can ask them about memory and those kinds of consciousness questions then, and hopefully we'll get an even wider, more diverse array of answers to that one. Any more questions from the floor?
Okay, lady over there, perfect. Thank you. My name is Jie Xiaoting from Xinhua News Agency, the state media of China. My question is about what happens if artificial intelligence grows into a different thinking pattern that we cannot understand, that we cannot communicate with. For example, we have artificial neural networks: people put things into them and they can imitate human handwriting, but you cannot change the handwriting if you are not satisfied with some part of it, because you don't know which part of their internal representation is responsible. You can't talk to them, actually. You can educate them, but you cannot directly change them. What if artificial intelligence thinks in a totally different way, so that we cannot communicate with it? Thank you. For example, right now we're not sure how intelligent dolphins are. We don't know whether dolphins are conscious. We're not good at understanding; our understanding of animal cognition is changing rather rapidly right now compared to where it was very recently. I think the point you make about artificial neural networks gets back to my point before about the difference between performance and competence. If a human has a certain level of performance, they have a competence which includes some ability to explain what they're doing, or at least to rationalize what they're doing. Most of our AI systems do not have that yet, which leads me to think that they are, sorry, Stuart, a long, long, long, long, long way from consciousness. I think this is a good question. My own research actually is in methods that are somewhat different from artificial neural networks, in that they are, in some sense, inspectable. You can understand what it is the system knows, and you can actually ask it questions about what it believes and why it believes it. With neural network technology, it really is a black box. You have a system with 100 million elements. Each of those elements is adjusted in various
directions; the parameters are changed. If you look at the Google vision systems, for example, which are quite good at recognizing different categories of objects, how they do it, nobody knows. All we know is that they do it pretty well, in many ways as well as a human being can given the same examples. They've done some little experiments where they look at what they call dreaming, where you can have the system generate images from its own internal circuits, and then you look at some of those images and they look funky, they look weird: they have eyeballs on stalks, and strange creatures which are mixtures of lions and cats and dogs, and so on. But that's about as much as we know. There are some internal sub-parts of this large network that perhaps describe different parts of objects, and somehow they're combined and mixed together to do recognition. If you're concerned about making sure that the machine is going to behave in predictable ways, this may not be the best technology to achieve that. We're rapidly running out of time, but I just want to get a couple more questions in, and I'm going to go back to one of the comments you made earlier about the community not really putting enough thought into the risks. What can we do to future-proof this situation? Is regulation of this industry, this science, possible?
Science is done by people writing on whiteboards and writing mathematical formulae; you can't pass a law saying that as soon as you get to alpha plus, you've got to stop right there in the middle of the equation and can't go any further. But regulation could at some point be applied. For example, if you're going to put a trading system into the stock market, it might be reasonable to require that the trading system satisfies certain properties, so that we know it cannot destabilise the market in the way that happened in the so-called flash crash, which was the result of algorithms that were not particularly well designed interacting with each other in ways that nearly destabilised the real economy. It was only because of built-in circuit breakers, which essentially stopped trading on the stock market, and because they then unwound all those trades, that they managed to rescue the real economy from what would have been a big disaster. So it would seem entirely reasonable that, either by self-regulation or by legislation, trading algorithms on the stock market should satisfy certain properties that prevent those kinds of runaway behaviours. You can imagine, later on with self-driving cars, that we would want the same kind of assurances: that those cars have been properly designed, that the algorithms are properly designed, so that they know how to react. Just as we have driving tests for humans now before they're allowed to go out on the road, we may need something more, a mathematical kind of test, to show that a self-driving car does the right thing. In the longer term it's much more difficult, because we don't know the shape of the AI systems that we'll see in 20 or 30 years' time, or what kind of regulation would make sense. The other thing that we can do to future-proof is actually to change the way we educate the engineers in the field. I think at the moment, if you look at civil engineering, the idea that a civil engineer would design a bridge without thinking about safety at all is completely bizarre. The meaning of the word bridge
means something that you can walk on without it falling down, but the same is not true for software, and AI software, so there needs to be a change in the culture of the field, so that when we build intelligent systems we are thinking about their effects on humans and making sure that they are safe and predictable, and that requires building those things into the education system. Very briefly, if you can. I agree completely that we can't regulate science, because we don't know what's going to be good and what's going to be bad. Regulating deployed systems, absolutely. I disagree somewhat with the emphasis that Stuart puts on mathematical proof; I think that's going to be very difficult. We've had horses in our societies for thousands of years. We had no mathematical proof of what they would do, but we had a general understanding of their performance characteristics. We knew: don't stand right behind a horse; don't light a match next to a horse's eye. So we knew the parameters under which horses were generally safe, and if a particular one was a real problem, it was culled from the herd and gotten rid of. So I think that will be more how it goes for us. A lot of laws were written around horses and how they could be controlled, but with no mathematical proof. To round things off, because this is what we do at the Forum, we look at opportunities and challenges, looking not too far into the future: could you both give us your greatest opportunity and your greatest risk from artificial intelligence in the coming year or two, in as few words as possible? It takes me that long to get through my email. So I think one of the things I'm most excited about is the potential for progress in understanding language, and that we will move, not in a year or two but in, let's say, five to ten years, from the current generation of search engines, which index the web but don't understand anything, to systems that in some limited ways can understand everything that they read: a system that has read essentially
everything that the human race has ever written, and extracted information from it, and synthesized it. That could be ten times more valuable to human society than search engines have been, and search engines have been a very valuable addition to our civilization, so I'm very excited about that possibility. In terms of risks, the risk in the next year or two is the development of autonomous weapons. Nations and companies are moving quite rapidly towards the capability for robots to be used in warfare in ways that are not directly under human control, and I know that the Chinese ambassador, for example, at the United Nations in Geneva has expressed grave concern about these possibilities, as have the Japanese ambassador, the German ambassador and the Vatican; the Pope is strongly opposed to autonomous weapons. So this is a question where we need some serious policy discussions, and it has to be done soon, because otherwise there will be an arms race, and it will be very hard to get back to the status quo from that. In terms of opportunities, I think the opportunities in Europe, in North America and in China, and of course Japan, are around the aging population. In all those societies there's been a demographic inversion going on: not enough younger people to look after older people. So AI and robotic systems can let individuals maintain control of their lives longer. I put driver-assist technology as an example of that: it lets people, as they get older, drive longer than they would without it, so it lets them keep their independence. I think there's going to be a tremendous pull across Europe, Asia and North America for technologies which let the elderly maintain their lifestyle and their independence longer. My big fear is there won't be enough robots. The risk is that not enough robots are being manufactured. What a fascinating session. We could be here all day, I'm sure, but we all have other meetings and appointments to get to. Thanks so much,
gentlemen, it's been a real pleasure having you here for our first issue briefing of the morning. Thank you all for joining us. We're actually back right here in ten minutes, talking about the digital transformation of industries and looking at how the digital revolution is transforming society in ways that we're not even aware of, so I hope you can join us. But for the time being, this session is now closed. Thank you very much indeed.