Hello and welcome to this session, a collaboration between the World Economic Forum and Arirang TV. I'm Kanyang Jennifer Moon, and thank you everyone for joining us today. Now, this is a session on artificial intelligence, and as you probably all know, artificial intelligence is no longer a science fiction movie; it's actually all around us. So we will be discussing artificial intelligence: the state of it today, its benefits and risks, near-term applications, as well as what we need to do to prepare for the future of artificial intelligence. I have a wonderful team of panelists here and I'd like to introduce them to you. To my immediate left is Professor Stuart Russell, one of the leading minds in computer science and artificial intelligence. He's with the University of California at Berkeley, so welcome to the panel. Thank you very much, nice to be here. To his left is Matt Grob, he's with Qualcomm. He is the vice president, excuse me, actually he is executive vice president, am I correct? Okay, and chief technology officer of Qualcomm. Welcome to the program. Glad to be here. And to his left is Ya-Qin Zhang, he is the president of Baidu. Baidu, of course, is the largest search engine in China, and it's also investing a lot in AI these days, so we look forward to listening to your stories about AI. Nice to be here. And last but not least, Professor Andrew Moore of Carnegie Mellon University, dean of the School of Computer Science, and we'd like to welcome you as well. Thank you. Now, before we delve any further into the topic of artificial intelligence, for our viewers we've prepared a brief rundown, a brief video on what artificial intelligence is. Let's start with a question. What do you think the most complex object is? Let me assure you, the answer is in your head, literally. That's because it is the human brain; the most complex networks, the most powerful systems cannot match it.
Changing that is the ultimate goal of artificial intelligence. It is not about building a robot, but creating a computer mind that can think like a human. But there are many steps along the way. So-called narrow AI systems are already everywhere, from Apple's Siri to Facebook's friend recommendations. It's in our cars, our homes, the financial markets. And narrow AI has been around for years, doing one specific task better than any human. The supercomputer Watson beat two human Jeopardy world champions back in 2011. But ask it to play poker, and it wouldn't know what to do. It couldn't learn a new game for itself. It couldn't think as a human. So we come back to the challenge, which some say is the danger, of creating an artificial general intelligence: a computer mind that thinks like a human, that improves, that learns, that can even exceed human levels of intelligence. Some predict we could see it by the year 2050, others even sooner than that, if it is even possible. It's a race worth billions. Some say it will save humanity; others say it could destroy us. Either way, if and when it happens, the world will be changed forever. Right, so a very important topic that we are discussing today, because it will change our lives forever. So Professor Moore, where are we today, or rather, where were we in 2015, in terms of artificial intelligence? How smart is it? So Jennifer, 2015 was a very big year for AI. Part of the reason is that the kinds of massive-scale machine learning which previously only the Googles and Microsofts and Baidus could do have become available to many researchers, through advances in computer technology. The other big thing that's changed is that many folks are actually leaving those kinds of companies to do new startups, because they see such a wide frontier. For me, the big lessons of 2015 were, one, emotional understanding.
Computers have always been very good at these sort of emotionless games like chess, but now they're very usefully helping with autistic kids, with education, and in other places, by reading what's going on in your face. And this has swept dramatically throughout the AI world. The other big thing, which has been going on behind the scenes, though you don't see it in front of the cameras so much, is the gradual work to remove the boring parts of white-collar work. This is a very hot topic. So for example, in the legal world, there are many startups now taking away the boring parts of understanding millions of legal documents to prepare for a case, by actually having computers read and understand what's in the documents. And over and over again, you see in the business plans and in the academic world, it's how we're going to get rid of the boring parts of white-collar work. It's not going to kill that work, but it's going to get rid of the tedious components of it. One particular thing which many of us find fascinating is that lots of humans are employed in negotiating with other humans: negotiating trade deals, negotiating to buy a car, and so forth. A big step in 2015 was computers which can negotiate and discuss with other computers and humans, without the assumption that the other party is going to tell you what they're really thinking. It culminated in the first major human-versus-AI poker game, which is very important because it's the first game where it's all about deceiving your opponent while you're playing it. So that was an exciting aspect. So that's 2015, and now we're moving on, I think, into a very exciting 2016. Professor Russell, do you want to add to that? I'd like to mention a couple of other things. So self-driving cars were shown in the video, and 2015 was the first year where you could actually take your hands off the wheel and have the car drive for you, and that's been a dream for AI.
John McCarthy proposed this as a goal for the field back in the late 1960s, and that's been successful. As Andrew mentioned, progress on understanding language in the legal area has been very important. If we extend that, the interesting thing about machines is that if they can read one document, they can read another one, and another one, and pretty soon they've read everything that the human race has ever written. They might not be able to read it better than a human, but they can read a heck of a lot more than human beings can. Search engines such as Baidu or Google are incredibly good at processing all those documents, indexing them, and sometimes returning useful ones when we put in a query, but they understand little or nothing about the content of the documents. So they can't really answer your question, and they may be giving you back a document that contains the wrong answer to your question. Whereas if the systems have really understood everything that the documents contain, at least in a factual sense, then they can be far more useful. If the search engine industry is worth roughly a trillion dollars right now, this new level of technology could be worth ten trillion, because it'll have so many more applications and be so much more useful to so many more people. Another interesting consequence of the ability to understand language concerns, for example, Siri, which is cute but really doesn't understand what you're saying. It's really, in some sense, a prepared set of answers to a prepared set of questions, and as soon as you get outside the prepared set of answers it says, oh, let me check on the web and see if it has something useful for you. But if Siri actually really understood your question, and really had access to a lot of knowledge, and were able to listen to all your phone calls, to understand all of your emails, to listen in on your person-to-person conversations because it's in your pocket, then it could be really the ideal personal assistant.
So we're not talking about Big Brother extracting lots of information from your personal life and giving it to someone else. We're talking about a system which is there on your shoulder and can provide advice and help you navigate the complicated world. And many people here in Davos are CEOs and directors of large organizations. They have extremely capable personal assistants without whom they would really be pretty useless. But imagine that capability: a very professional personal assistant who knows everything going on in your life and can say, oh, you really should cancel that appointment because something more important needs to be done, or, don't worry, I've taken care of the electricity bills, and, oh, by the way, I've ordered lunch for the kids at school, and all those things. That capability could be incredibly valuable for people with far fewer economic resources, because they're the ones who really face a struggle navigating the complicated world, where they have two jobs or are single parents and all the rest. So this technology could be a wonderful boon for billions of people around the world. And that includes myself; I think I would need one as soon as we get that out into the commercial world. Ya-Qin, Baidu has been investing a lot in AI. I know that Baidu also opened an AI research center in Silicon Valley. So how are you adopting this in the industry? Professor Russell mentioned that this will be a much more valuable market, a much bigger market. How do you plan to, I guess, monetize that? Well, you know, AI is really becoming mainstream, and it has come to center stage in the last few years. Ten years ago, twenty years ago, it was pretty much in the lab. For Baidu, in fact, it's essentially embedded in every product and service we offer: voice recognition, text-to-speech, machine translation, the search engine, the advertising platform, and also autonomous driving.
And we have an effective platform which is open to all the teams within Baidu, and we make that available, in fact, to all the researchers in the world. So, to the point you just mentioned, in fact small companies, startups, actually can use the capabilities of large companies like Baidu or Google or Microsoft. Because for AI there are a few resources you need: one is obviously huge computing power, plus lots of data, and only big companies can build that. Right now we have the world's largest deep neural nets, which we're happy to share with the rest of the world. So, open AI, I suppose, and making that data available even to aspiring developers, so that we can make advancements in AI research. Matt, Qualcomm displayed some very impressive products at CES just last month. And Qualcomm is doing a lot in terms of employing artificial intelligence in your chips and microprocessors. So, one aspect of that is we're starting to see these technologies move out of the data center and out into the world. Autonomous vehicles are an example. It really is the case now that I have friends at the office with these new vehicles, which you can now purchase, not prototypes, that go down the road while you take your hands off the wheel. That's remarkable. And the advance there is allowing us to put that kind of capability into embedded devices: vehicles, phones, tablets, but also all kinds of internet-of-things devices. Some have a connection to a large data center and can benefit greatly from that, but some do not and can still take advantage of these techniques in a very local fashion. So it's the very widespread nature of this that we're very excited about. So of course when we think AI, like Professor Russell said, one of the key things that pops into our minds is driverless cars. So would it be safe for me to throw out my driver's license now? Because I'd love to. Driving is such a hassle for me.
And autonomous cars, if they can help me get to where I want safely, I'd adopt that. I'd buy that any day. I think it's going to be a few years before you can throw away your driving license. Right now, Tesla's autonomous mode is constrained to operate only on the highway. It's not allowed to operate on city streets, where there are lots of pedestrians and construction and people accidentally reversing the wrong way down the street and all kinds of things. And the reason is that although the perception is quite capable, so it's able to detect persons, other vehicles, obstacles, policemen giving signals, road signs, traffic lights and so on, the decisions about what to do are currently made by what we would call in AI a good old-fashioned rule-based system. So there are rules that say, if such and such is true, then you need to stop; if such and such is true, you should change lanes to the left. And every so often, you find a situation where the rules don't apply. So you're driving along the road and a cyclist is coming the wrong way in your lane, slightly to one side, and the Google car gets confused in that situation. I was told this by the head of Google X. Because now it's not sure whether it's on the right side of the road or the cyclist is on the right side of the road, the rules don't apply, and it doesn't know what to do. It says, OK, human being, please take over. Which is fine if you're there with your hands poised to take over and you're paying attention, but if you're checking email on your phone or playing cards with your friend or whatever, it could be catastrophic. So a different approach to the overall control of driving needs to be taken, and that approach involves not just rules, because the rules tell you what to do but not why. A rule tells it, don't crash into that pedestrian, but the car doesn't know why. It doesn't know that people don't like to be crashed into. It just has a rule saying, if there's a pedestrian, don't crash into them.
But it doesn't know why. And to deal with unexpected circumstances, and this is something we learned a long time ago when we first worked on chess programs, you can't build chess programs with rules. What you have to do is endow the machine with knowledge of how the world behaves: how the pieces move, what the opponent is likely to do, and the value of a situation that you might reach if you took a certain course of action. So the same basic design needs to be applied to driving. The system needs to understand: well, if I change lanes, I will be over there and I will avoid a collision with this object, but I may get rear-ended by a car that's coming up fast in the outside lane. And then it has to make a trade-off. Should I get myself rear-ended by another car, or should I possibly risk knocking over the cyclist who's coming the wrong way? Or should I slam on the brakes and hope for the best, that I stop in time? So this involves looking ahead, it involves making trade-offs among the different possible things that could happen, and also weighing up the probabilities, because you can't necessarily predict with perfect accuracy what exactly is going to happen. And this kind of decision-making technology is being developed, and there are companies working on it, but it will be a few years, I think, before it reaches the level of maturity you would need to get government approval and go out there and throw away the driving license. So I think you've described to us the limitations of the artificial intelligence technology that we have today. And I'm sure, Matt, as Qualcomm incorporates artificial intelligence into your chips and processors, you face limitations as well. So for instance, what can an AI-enabled smartphone not do for me at this point? Okay. Well, it's been said that if AI is a rocket, then the engine is the algorithms and the fuel is the data.
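To make that "algorithms plus data" recipe concrete, here is a deliberately tiny sketch in Python: a "scene classifier" trained on a handful of made-up (brightness, saturation) feature pairs. The labels, feature values, and the nearest-centroid algorithm are all invented for illustration; a real phone would run a deep network trained on millions of labelled photos, but the ingredients are the same: an algorithm plus training data.

```python
# Hypothetical sketch: classify a photo's scene from two invented
# features (brightness, saturation), trained on toy example data.

TRAINING_DATA = {                      # label -> example feature vectors
    "night":     [(0.05, 0.20), (0.10, 0.30), (0.08, 0.25)],
    "landscape": [(0.70, 0.80), (0.75, 0.90), (0.65, 0.85)],
}

def centroid(points):
    """Average the example vectors for one label."""
    n = len(points)
    return tuple(sum(coord) / n for coord in zip(*points))

# "Training" here is just computing one centroid per label.
MODEL = {label: centroid(pts) for label, pts in TRAINING_DATA.items()}

def classify(features):
    """Return the label whose centroid is closest to the input features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(MODEL, key=lambda label: dist2(MODEL[label], features))

print(classify((0.07, 0.22)))  # a dark, washed-out photo -> "night"
print(classify((0.72, 0.88)))  # a bright, saturated photo -> "landscape"
```

The point of the toy is the division of labor: the algorithm (here, nearest centroid) is fixed, and all the "knowledge" comes from the data it was trained on.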
You really need to have the combination of algorithms and computing capability, together with a database of data that you can use to train a device. So, to your question, what are the limitations on a mobile device? Well, you have constraints on both of those. If it has a cloud connection, it can benefit greatly from the training and computing power that's available there. So what do we do with all this? To think of some examples, we now have phones and capabilities, commercialized and out of the lab, that do things like categorize images. You take a picture with your camera. You used to set, you know, whether it's landscape, sports, night, portrait; now the device can actually do that for you, and actually do a very good job, with a deep learning algorithm that's been trained on a database. So that's an example. It can recognize faces and shapes. It can recognize handwriting. And again, it's relying on that combination of algorithms and database. So those are the two constraints. That's what we're doing research on: trying to improve the algorithms, improve the capability of the hardware, and improve the ability to absorb information from previous tests and experiments and take that into account. And a large volume of data is probably something Baidu doesn't have much problem with. We have a lot of data; we have a lot of users and a lot of scenarios which generate data. Let me just get back to autonomous driving. It's certainly very fascinating, and we just completed our first road test in Beijing, from our office, and that actually included the local roads and the highways, the beltways, without any human intervention. And it was able to drive up to 100 kilometers per hour and do all the sophisticated things that people normally do. However, it's going to take years to become commercialized. I completely agree. It's not only the computer vision: you have to detect objects, you have to know where the people are.
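Perception is only half of the problem; the look-ahead trade-off Professor Russell described a moment ago can be sketched as an expected-cost comparison between actions. Everything in this sketch is invented for illustration: the action names, probabilities, and cost numbers are made up, and no real vehicle's logic looks like this. It only contrasts a rule-based controller, which has no notion of "why," with a decision-theoretic one that weighs outcome probabilities.

```python
# Hypothetical sketch: rule-based control vs. expected-cost decisions.

# A rule-based controller: fixed condition -> action, no notion of "why".
def rule_based_action(situation):
    if situation.get("pedestrian_ahead"):
        return "brake"
    if situation.get("obstacle_in_lane"):
        return "change_lane_left"
    return "continue"  # no rule applies; a real system would ask the human

# A decision-theoretic controller: each action leads to possible outcomes
# with probabilities and costs; pick the lowest expected cost.
def expected_cost_action(outcomes):
    """outcomes: {action: [(probability, cost), ...]}"""
    def expected_cost(branches):
        return sum(p * cost for p, cost in branches)
    return min(outcomes, key=lambda action: expected_cost(outcomes[action]))

# Invented numbers for the cyclist scenario described above.
outcomes = {
    "brake_hard":       [(0.90, 0.0), (0.10, 50.0)],   # small risk of being rear-ended
    "change_lane_left": [(0.70, 0.0), (0.30, 200.0)],  # risk of hitting the cyclist
    "continue":         [(1.00, 1000.0)],              # certain collision
}
print(expected_cost_action(outcomes))  # -> brake_hard (lowest expected cost)
```

The second controller "knows why" in a limited sense: change the probabilities or costs and its choice changes accordingly, whereas the rule table would have to be rewritten by hand.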
But also there's infrastructure and investment that's needed. You need a very different kind of mapping, high-precision mapping. You need more accurate positioning. So the car we have actually has a lidar scanner that does multi-sensor real-time fusion; you can get position accuracy down to a few centimeters, which really requires a lot of investment in infrastructure. So it's quite interesting. But this is coming faster than most people think. It won't take 20 years. When my team proposed this research project to me, they said, well, this is fun, we're going to invest: this is an AI problem, this is a mapping problem, it's going to take maybe 15 years. But right now it looks like it will probably take less time than that. Amazing. Right. I mean, the fourth industrial revolution, and artificial intelligence is part of that. And of course Professor Klaus Schwab has mentioned numerous times that the unique thing about the fourth industrial revolution is how fast it's evolving, how fast it's approaching. And I guess the question on probably everyone's mind here is, will machines become smarter than humans? Because at this rate, wouldn't that be possible soon? We're all going to argue about what we mean by smart. But one by one, you are going to see things which we thought required our own personal ingenuity turning out to be things which can be automated. Many professions which we thought were smart, and I'm going to actually go out on a limb here and name the lawyer profession or the physician profession, there's a lot that can be automated there, and those careers might diminish. There are some other areas where we're going to be using AI to help the humans who will remain in charge, such as teaching small kids, or nursing, or things which involve care and really deep social interactions with other folks.
So I do see quite terrifying changes in the makeup of the workforce, but the things for people to do, armed with these intelligent assistants sitting on their shoulders, are actually going to get more interesting, not less interesting, as a result. Right, so I think so far we've mostly focused on artificial intelligence being an aid to humans and not a threat. I hope we don't have any lawyers or physicians in this room, because Professor Moore just gave you a warning there. So what is it that humans are worried about, then? Because oftentimes we come across articles, come across opinions, where we say, whoa, whoa, whoa, wait a minute. Do we want to, I guess, make these machines smarter? What if we can't keep them under control? So this is a longer-term question, and the worry arises from the possibility, as Andrew just mentioned, that machines may become smarter across the board, that they will develop general-purpose capabilities. And just to give you a little example, this year DeepMind, which is a company that was recently purchased by Google, demonstrated a learning system which in some sense resembles a newborn baby. It has absolutely no pre-programming of any kind for any task. And they expose it to the screen of an Atari video game, and its only goal in life is to score as many points as it can. It knows nothing about the content of the screen when it begins. It doesn't know that there are moving objects. It doesn't know that there's such a thing as time or space or death or blowing up or aliens or spaceships or anything like that. So it's given this screen, just like a newborn baby opening its eyes for the first time. And within a few hours, it's able to play most of the Atari video games at a superhuman level. So this is a nice demonstration of generality. These games include driving games, shoot-em-up games like Space Invaders, and complicated strategy games involving mazes and finding paths through complicated situations.
So there's a wide variety of what we would think of as mental skills involved in doing well at these games. We don't actually know how it plays them; that's another interesting thing. These large deep learning networks are pretty much completely inscrutable. We don't know what they're doing, but we know that they're playing at superhuman levels across a wide range. So if you had a newborn baby that woke up on the day of its birth and by the afternoon was playing Atari video games at superhuman levels, you might be a little concerned about that. And so that's just an early taste. We know that those techniques, although they're effective for Atari video games, don't extend to all the kinds of cognitive tasks that humans do. But it might only be a small number of breakthroughs between now and general-purpose learning systems that could take on the full range of human cognitive tasks. The interesting thing about breakthroughs is that they're very hard to predict. Trying to predict things based on Moore's law, saying, okay, in this many years we'll have this much CPU power, and that's equal to the human brain, so therefore we'll have human-level intelligence: that's a really not very convincing argument. But in the history of nuclear physics, there was a very famous occasion when the leading nuclear physicist, Ernest Rutherford, said that extracting energy from atoms was impossible and would always remain impossible. The next day, Leo Szilard invented the nuclear chain reaction, and within a few months he had patented the nuclear reactor and designed the first nuclear bomb. So sometimes it can go from never and impossible to happening in less than 24 hours. So what I would argue is that the possible risks from building systems that are more intelligent than us are not immediate, but we need to start thinking now about how to keep those systems under control and how to make sure that the behaviors they produce, the decisions they make, are beneficial to us.
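The kind of reward-driven learning Professor Russell describes can be sketched, in spirit, with a tiny Python example. The real system, DQN, learns from raw screen pixels with a deep network; the toy below is a tabular Q-learner on an invented five-cell corridor, and only illustrates the core idea: the agent is told nothing about its world except the score, and from pure trial and error it still discovers what to do.

```python
import random

# Toy sketch, not DeepMind's system: a 5-cell corridor where reaching
# the rightmost cell scores a point. The agent knows nothing else.
random.seed(0)

N_STATES = 5                  # cells 0..4; reaching cell 4 "scores"
ACTIONS = (-1, +1)            # step left or step right
GOAL = N_STATES - 1
alpha, gamma = 0.5, 0.9       # learning rate, discount factor

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0     # score only at the goal
    return nxt, reward

# Off-policy Q-learning from purely random exploration.
for _ in range(500):                          # episodes
    s = 0
    for _ in range(30):                       # steps per episode
        a = random.choice(ACTIONS)
        nxt, r = step(s, a)
        best_next = 0.0 if nxt == GOAL else max(q[(nxt, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        if nxt == GOAL:
            break
        s = nxt

# The greedy policy it discovers is "walk right" from every cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)
```

Nothing in the code mentions "right is good"; that knowledge exists only in the learned values, which is also why, as noted above, such learned systems can be hard to inspect.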
We need to start doing that research now. And just to give you an analogy: if someone said, well, you know, a giant asteroid is going to crash into the earth in 75 years' time, would we say, oh, let's come back in 70 years and we'll start thinking about it? No. We don't know how to destroy the asteroid, and so we would start working on it now, to make sure that when the asteroid arrives, we have the technology we need to keep the human race going. So I think the analogy can be made to the possibility of superhuman AI just from common sense. You know, if you're a gorilla, are you happy that the human race came along, being more intelligent than you? How are the gorillas doing right now? Probably not too well. So there's a common-sense idea that having things smarter than you could potentially be a risk. The particular risk of having systems smarter than you comes from the fact that you give a very, very intelligent system an objective, and let's hope we give them objectives; let's not leave it up to them to decide what they want to do. Let's make sure that they follow the objectives that we give them. The difficulty is that we don't know how to specify objectives very well, and King Midas found this out a long time ago. He said, I want everything I touch to turn into gold. He got exactly what he asked for: his food, his drink, his daughter all turned into gold, and he wasn't very happy about the result, and it was irreversible. And when you give an objective to a machine that's much more intelligent than you are, it's going to carry it out. It's not going to want to be turned off, because if it's turned off it can't achieve the objective you gave it. So you're essentially setting up a chess match between the human race and machines that are more intelligent than us, and we know what happens when we play chess against machines. Can I jump in here with a couple of comments? Sure. I run a large AI
school within a university, and the faculty and the students often come to me worrying about this issue. So, on the one hand, with the systems we would call narrow AI, which they're building at the moment, the reason the students and faculty are so passionate, and in fact why a bunch of them haven't gone to just work for Google or Baidu, is that they see ways to save lives right now. If we could reduce the number of deaths on the road by a factor of 100 through intelligent cars; if we could have it so that a poor person who does not have access to elite medical advice can actually talk to an assistant which gives them that kind of advice; if we can have very effective teaching where our kids are actually educated by humans who are being helped by the technology: many of us feel like we can see a safer world and a happier world for the current generation, who seem to be facing a lot of problems. But at the back of our minds we are very concerned about the question of the safety of autonomous systems. For us, at the moment, with narrow AI, our main effort goes into making sure our systems are safe, in that they're going to do what they say they're going to do. The existential questions about the possibility of superhuman robots are up there with things like the grey goo caused by nanotechnology, or the dangers of broadcasting outside the earth's solar system and aliens coming to get us. Those are actually things worth looking at, but for many of the students and faculty working on this, the thing they're really rolling up their sleeves for at the moment is to save, in some cases hundreds, in some cases millions, of lives through the use of technology, and it's a very active debate that we in the AI community are engaging with all the time.
Let me ask you business and industry leaders, do you agree with this? Because you've just jumped into this new realm of artificial intelligence, invested lots of money in it, and hopefully you'll get some return on that, right? And we've just gotten messages of precaution from the two professors. Well, I think we have to have caution. I was kind of thinking through all the sci-fi movies during this description. I don't think we have anything to worry about in the extremely near future in terms of machines taking over; there's a lot more positive potential. I think you listed some of them very nicely: to make cars safer, to make medical devices safer and work better and make better decisions, to empower those that don't have the resources to talk to the best doctors to still be able to make those kinds of decisions. We have to be mindful of security; that's very important. We have to be mindful that these devices are really functioning the way they were designed and that no one's come in and changed that. And even there, one can use artificial intelligence to improve security, and so that's another facet that we're exploring to a great extent. For example, if you turn on the flashlight on your phone and it starts sending your contact information to another country, an AI agent can recognize that as unusual behavior and say, hey, something odd is going on there, you might want to check into that. So in the very near future, I don't think we're going to have some of the science fiction scenarios. They could of course happen, and I agree you never want to say no, this will never happen, because that's often not the case. But in the near term, the applications for this are just profound. There's a lot of talent going into startups and research in this field, a lot of interest at our company in this field, and so it's a very exciting time. Well, you know, for any company or university doing AI research, you have to be aware of and be concerned with the overall direction, and the strong AI that exceeds human
intelligence, obviously, is a legitimate concern, a question to discuss, and actually it's well articulated in your two letters that we need to make sure AI is responsible and controllable. But in the short term, obviously, industry has invested in narrow AI that solves real problems; it's really augmentation of human intelligence. However, I'm concerned about another side of the coin. You know, as machines become more intelligent, as we have more dependency on these complex machines, are human beings actually becoming less intelligent in a sense? We try to find information from the search engine; before long we won't know how to drive. It makes people lazy, that's right, so we don't think as much as we normally do. So that is a concern to have. The other concern I have is essentially social behavior change: there are things which are very foreign to what we normally think, or the decisions we make. Let me give you a small example. About a month ago, my wife and I were driving from Seattle to Vancouver, and probably an hour or two along the way we got a call from a surveillance company. They said, there's been a break-in intrusion at your house. And so we drove back; the police were there, and there was no sign of intrusion, nothing had happened. Then later on they played back the video: it was the robot cleaner. Somehow it had gotten triggered while it was cleaning the house. So we got back on the road. Things like that are really strange, very interesting, but as machines become more autonomous and we have more robots at home and at work, those are the things we need to adapt to. Right, let's move on. So we know where AI is now; we've also briefly touched upon what kind of areas we should really keep on our toes about and stay precautionary. But what would be the next game changer in artificial intelligence, both from academia and, in the practical sense, from industry? Who wants to answer this? So, one thing which is really embarrassing for all of us who are roboticists is that we're really good at vision now,
and at robots which learn, robots which pick things up, we're actually still sucking; we haven't progressed very much in the last 10 or 20 years. So many folks in the robotics world are really working hard now on the simplest question of reliably picking up a drink. Once we have that, there's the prospect that maybe tens of millions of people around the world with impairments which are actually damaging their everyday lives can actually have robot arms or other things on their wheelchairs, or maybe even eventually an exoskeleton; this will become realistic. But we still need to deal with this fact that manipulation, for us, is a lot harder than, for instance, driving a car down a freeway at 70 miles an hour. Can I just clarify this? I was speaking with a professor in Korea, the one who invented Doi Mang, the robot that picks up different things, right? And he told me that artificial intelligence and the not-so-simple act of picking things up are different. I don't think we need to worry about what to define as artificial intelligence. It just turns out, over and over again, that things which we thought were really fancy and clever, like playing Atari video games, turn out to be quite easy to implement, but other things which we thought should be pretty easy, well, we all think that it's easier to pick up a glass than to drive a vehicle, and it turns out to be the other way around. So these are the interesting things that happen. This one is a very important one for artificial intelligence, because at the moment, when you look at where robots are being used, they are being used as these mobile platforms, mostly for inspection and surveillance, but they actually have a lot of trouble doing useful things, so we feel like we have some catching up to do. I think it's a chicken-and-egg problem, in that to really be as dexterous as a human being, the robot needs to have very, very complicated hands. The human hand is an incredible machine, with millions of sensors, with millions of control fibers, many muscle groups, and we
don't have robot hands that are anything close to that degree of complexity. It's very, very expensive to develop that technology, and at the moment the control algorithms don't exist for it, because you can't buy one of those hands to even develop the control algorithms on. So we're lacking on the physical hand side and on the control side because of the chicken-and-egg problem. It's possible, I think, that 3D printing might provide a breakthrough, because with 3D printing you can actually develop, manufacture, and test much more complicated physical devices than was possible a decade ago, when you'd have to basically gear up a whole manufacturing line before you could produce a robot hand. So that might be the thing that breaks this logjam, and then we'll see, for example, robots that can successfully pick blackberries in the wild, so that I can make jam without having to spend 24 hours picking blackberries. So applications in agriculture, and, as Andrew mentioned, in elder care, for example, are quite feasible and very important. I'm going to ask you directly, Matt: what does Qualcomm have its eyes fixed on for your next game changer in artificial intelligence? Well, we look at the mobile use cases, where you have a device that has a wireless connection to the cloud but also has a lot of local processing. So we're looking at applications for smartphones: recognizing images and acting upon them, or looking at your context. Are you busy? Is there a calendar that needs to change? Are you moving? And being able to formulate and perform that personal-assistant type of function effectively. That's very, very hard; there's a lot of subtlety. You have to have, again, an algorithm and a database; you have to understand how you want things to be processed, and maybe how you've done them in the past. So those are examples. And then robotics, medical devices that do diagnosis or navigation, or self-driving vehicles: we're into all of those things, and they're all very good playgrounds for artificial
intelligence and machine learning, both connected to the cloud and on mobile. And at Baidu, obviously we're looking at search, autonomous driving, user interfaces, and personal assistants, but we are also investing in technologies that could be applied to finance and healthcare. In finance, for example in insurance and consumer loans, AI and machine learning can help you identify patterns that help you reduce risk, and finance is all about risk management and risk control. We work with lots of universities to make sure the technologies can be used for drug discovery and for gene sequencing, to really move the life sciences forward. And for us, we already have products which use machine learning for doctor appointments. We have four terabytes of information on all the different kinds of diseases, and when somebody is trying to find a doctor, we try to match them with the right doctor, and even enable self-diagnosis beforehand. So those things can help us get into new sectors, and it's tremendous. Right, fascinating. Now, as promised, I will take questions from the audience, and we're going to use the very traditional way of raising your hands, although this is AI. The gentleman over there; I saw his hand up first. Thank you. It's a question for Stuart Russell, but anyone, really. I'm really interested in how you think AI will improve us. What about the superhumans that we are talking about here in Davos? Will Ray Kurzweil be right? Will we download our brains into a machine? Will I have a memory implant in my mind soon? I really need it. Thank you. That's a great question. I'm not at all optimistic about the possibility of uploading our minds into machine hardware and living forever, and one of the reasons is that we have absolutely no scientific theory of consciousness whatsoever, and there's no guarantee that anything would survive the upload process, even if we could actually get all the information out. But one thing I think is a possibility in the not-too-distant future: we've already seen a lot of
progress on brain-machine interfaces that allow, for example, someone who's completely paralyzed to control a robot arm to pick up a cup of coffee and have a drink, and that's done by direct connection of electrodes into neural tissue. The amazing thing about that is that we don't understand the signals that the brain uses to control its effectors, its arms and legs and so on. Basically we leave it up to the brain to figure out what signals need to be sent to this robot arm to have it do what it does. It's not a conscious process, but with a relatively small amount of training, a monkey or a human brain (I don't want to say the monkey or the human, because the human doesn't know what's going on; it's the brain that figures it out) learns what signals need to be sent to this electrical system to get a robot arm to do what it wants. So it's a small imaginative step, though I'm sure a big technological step, from there to say that perhaps, instead of closing the loop through a robot arm and then back through the visual system, we could have electronic memory storage devices that the brain would learn how to use to store information. That could obviously be very useful for people who are developing early-onset Alzheimer's, for example, and it could overcome one of the biggest bottlenecks in human cognition, which is our short-term memory. Our brains are only able to keep in the forefront of our minds about five things, the famous "five plus or minus two". That's why it's very difficult to prove mathematical theorems, and why it's very difficult to imagine a sequence of 75 moves on a chessboard: you just don't have the short-term memory to keep it all together. If we could multiply our short-term memories by a factor of 10, and that's only 50 things, we could dramatically alter our cognitive capabilities, and that to me is potentially enormously transformative. Whether it's feasible I don't know, but it's the kind of thing I could see people doing experiments
on in a decade's time. All right, I think we have time for one more question. I'm going to let this lady right here ask it. Could you please identify yourself? Hi, my name is Yuhyeon Park, from Korea. I'm a university researcher, and I also run a social initiative called IC HEROES, a digital citizenship program for children. I have a question for all of you. When it comes to AI smartness, a calculator is already smarter than I am, so I wouldn't worry too much about the smartness. Einstein said that if you want your kids to be smart, you should read them fairy tales; I guess that's not the approach for AI, which just keeps getting smarter and smarter. What I'm wondering is whether the technology will be developed to promote humaneness, not just convenience, healthcare, and all that. I heard from Professor Russell that it really comes down to who the decision makers are and what the set of objectives is, and I think that also leads to governance. Right now, internet governance is not perfectly safe, especially for minorities, for women and children. So how do you see AI and data governance in the future, and what would be the ideal global multi-stakeholder collaboration, from the public side, the academic side, as well as the government side? It will probably be a long way to go, but I'd like you to share your perspective on the ideal case that we want to achieve. I briefly want to jump in on one really important solution: you actually need AIs to be built by teams of men and women working together. One of our big pushes, and all of us are working on this, is to make sure that all kids, especially girls, are encouraged into this area, because we cannot have this built by one demographic group. So there are big movements to make sure that the people who are building this amazingly exciting technology actually represent the population of the planet instead of just, frankly, a
bunch of guys. That's important. On the question of helping with the education of youngsters: one of the big lessons we've seen in the last few years concerns emotion understanding. When you're using educational devices, not having them be monolithic robots, but having them actually have personalities and react to the child while the child is learning, really helps with the overall outcomes of these learning systems. So one of the keys to having computers help educate children is to have the computers be, behave, and react like humans. All right, I can take another question. This will be the last question, right here. Hello. So this morning I attended the scariest session I've ever attended in my life, and it was on cyber warfare, which seems to be a very big problem. They talked about how it has the lowest barriers to entry but the highest level of danger: in normal warfare you have one defender for three attackers, but in cyber you need ten defenders for one attacker. And then they said, you know, if you send an email with a hidden piece of code in it to ten thousand people, saying "click this to win five dollars", some percentage of them are going to do it. So it seems like AI would be one of the perfect solutions to these problems, and there are hundreds of millions, turning into billions, of dollars being spent on cyber defense. Is that a big part of what this world is working on? So there are really two applications of AI in warfare: cyber warfare is one, and autonomous weapons is the other. There are connections between the two, but in the international arena they are being pursued separately. Autonomous weapons means the ability of a machine to choose its own targets, to decide where to go, what's a target, and to attack the target by itself. And that gives a capability that previous weapons have not had: previously, with one weapon, you needed one person to control that weapon and launch it, but with
autonomous weapons, by definition, one person can launch a million of them, and they can each choose their own targets. So you create a weapon of mass destruction that is very easy for lots of countries and non-state actors to employ to catastrophic effect. So I think there's a very important need to control autonomous weapons. Cyber warfare, on the other hand, is already going on, and I think, unless there's a very serious effort in the international community to control it, it's only a matter of time before consequences serious enough to actually trigger a real physical war could occur. AI can be useful in detecting attacks, but it can also be useful on the offensive side, and that's one of the reasons why you need ten defenders for every attacker: the offensive side can replicate. You can have a million AI systems all trying to find ways into every part of the infrastructure of a country, and that's very difficult to defend against. So I would really encourage anyone in the audience who has any sway over these negotiations to take this very seriously. Thanks for the question. I would really like to take more questions, especially from this side, because I kept looking to that side and happened to choose the questions from there, and I apologize to all of you, but this is all the time we have. So before we let you go, I'd like to give each of our panelists about 45 seconds for a wrap-up thought, your final thought before we end our session, starting with Professor Moore. Thank you. So we are in a really exciting time, and we now have hundreds of thousands of young computer scientists around the world. The thing I like about them is that they are working towards using this advanced technology to help us with many of the problems we've got: social problems, political problems, medical problems. In my opinion, one of the bright benefits we have at the moment is that artificial intelligence is being used
for good across the planet, and I very much encourage people, especially youngsters, to get into this area. It is the one thing which is closest to working magic at the moment. Yes. If you look at all the technologies for the coming decades, AI is certainly the foundation, an engine that drives everything else. So if you have a start-up, or if you are investing in your business, consider AI; it's a necessity for everything else. Really, I couldn't agree more. I think the video got it right when it said that this is going to change our lives. It really is, and it's going to be by far mostly for the better and for the good. It's very exciting, and it is moving fast. I'm not concerned about downloading my consciousness today; I don't know, that might not happen for a hundred years, and I will never say never. But advancements that are pragmatic and useful, that improve the performance of our products, that improve medical devices and diagnostics, those are all upon us, and some of them are happening already. It's just a very positive movement overall, and we're very glad to be part of it. So the way I think about it is that everything good we have in our lives, everything that civilization consists of, is the result of our intelligence. It's not the result of our long teeth or our big scary claws; it's from our intelligence. So if AI, as seems to be happening, can amplify our intelligence, can provide tools that make us in effect much more intelligent than we have been, then we could be talking about a golden age for humanity, with possibly the elimination of disease and poverty, and solving the climate change problem, all facilitated by the use of this technology. So I am extremely optimistic that the upside is very great, and that's the reason why we need to make sure that the downside doesn't occur. Right. You know, we could talk about this for hours, but this is unfortunately all the time we have. One other area that I wanted to get into, but that will have to wait until next time we meet, is of course regulation: are we ready
to make these machines smart, even smarter than us, and do we have the means to regulate and control them so that they are used for positive purposes only? I think that's something we should think about, and maybe next year we'll be sitting here again and saying, "Oh, driverless cars, that's such old news, we all own them." So it is moving at a very fast pace, and it's one thing that we should all be keeping our eyes on. Right, I'd like to thank our audience here for joining our session today, I'd like to thank our viewers, and of course I'd like to thank all of our panelists for this great discussion today. And yes, let's consider AI, like Yacin said.