Good morning, everybody. Welcome to this NHK TV session. I'm the moderator for this session, Hiroko Kuniya. There's been a lot of media coverage of artificial intelligence, about its potential benefits to society as well as its potential negative implications. There has been tremendous progress on new algorithms, machine learning, machine vision, speech recognition, robotics, and also on the availability of big data. These advances are creating huge business opportunities and attracting investment on a scale that has never been seen before, and this in turn is prompting more and more development of the technologies. Artificial intelligence has the potential to become a very powerful tool, helping us think faster and more efficiently, increasing productivity, and solving difficult social problems. So it will have a significant impact on our lives, and increasingly so in the future. Today I'd like to explore what the new technologies are doing and how they are changing our lives, what is in store for the future, and also how we can improve the chances of reaping the benefits while avoiding the negative consequences. Let me introduce our panel now. We have a great panel of five people. Let me start with Mr. Goldbloom. Anthony Goldbloom is the founder and CEO of Kaggle, a company specializing in cutting-edge machine learning, well known for the competitions it conducts among 250,000 data scientists around the world to provide solutions to issues brought by corporations using big data. Next to him is Professor Rodney Brooks, the founder, chairman, and chief technical officer of Rethink Robotics, known as the creator of the Roomba. Do you know the floor-cleaning robot? He is now marketing a low-cost, self-learning, human-friendly industrial robot called Baxter. By the way, Baxter is up on the screen right there, on the left side. Next, let me introduce Mr. Hiroaki Nakanishi, the chairman and CEO of Hitachi.
Hitachi is now heavily focused on its infrastructure solutions business, which provides solutions to social problems such as traffic jams, or creating energy-saving cities. Next to him is Kenneth Roth, the executive director of Human Rights Watch. The organization is concerned that new technologies now being rapidly developed could potentially infringe on human rights. And lastly, next to me is Professor Stuart Russell, a professor at the University of California, Berkeley. He specializes in artificial intelligence and is the author of the world's best-selling AI textbook. So thank you all for joining us. First of all, I'd like to start with Professor Russell. You probably have a bird's-eye view of what is happening at the frontier of artificial intelligence. From a layman's point of view, and I'm not a specialist in artificial intelligence, why is there suddenly so much interest, so much investment, so much coverage, and such accelerating development of the technologies? Can you give us an overview of what is happening now? So what's happening is that techniques that have been in the research pipeline, in some cases for many decades, have crossed the threshold from being interesting lab curiosities to meeting the needs of people in the real world. So from the outside it seems like an explosion, whereas from the inside it looks like continuous progress that has crossed a threshold of usability. We've seen this in areas like machine translation, and we're going to see it in self-driving cars, for example. The first self-driving car was demonstrated in 1988, in Germany. So it's been quite a long time of gradually improving the technology, the reliability, the safety, until hopefully, within the next few years, we'll see that kind of technology crossing the threshold to usability in the real world. Can you tell us how much investment has increased?
I'm sure you can't give us an exact figure, but what is your feeling for the increase in investment? If I had to guess, I would say that the commercial investment in the last five years has exceeded the entire worldwide government investment in AI research since its beginnings in the 1950s. So that's why the technology is accelerating. So this is the cause of the acceleration. Right. Professor Brooks, you say that you are seeing the democratization of robots, which means that robots are going into our daily homes. And Baxter, the robot you created, is very low-cost; it can be taught by ordinary people, it can work next to people, and it's very human-friendly. Please give us an overview of the difference between traditional robots and the significance of the robots that are coming now. The traditional industrial robots, which we've seen for over 50 years, were developed at a time when computers were very expensive and sensors were expensive. So they don't sense the environment, and the traditional industrial robots we still see in automobile factories are just like that. Industrial equipment has not had the approach that, say, smartphones have had, where smartphones were made to be easy to use. Industrial equipment has been built by engineers, for engineers. What I've been trying to do for the last 20 or so years is make the advances in user interfaces available in industrial equipment: first home-cleaning robots, but also, just behind me, one of the PackBots. At iRobot we had 4,500 of those in Iraq and Afghanistan for defusing roadside bombs, and they had to be used by 19-year-old soldiers who had very little training. So we had to make them easy to use. So when I talk about democratization, I'm talking about taking equipment which in former times was very complex and needed a very highly trained team to operate, and making it just like a smartphone, something an ordinary person can figure out in a few minutes.
I heard you say that your Baxter is somewhat aware of people, that it has common sense. What do you mean by that? Well, the common sense is very simple common sense. A traditional industrial robot, I'll get a prop out here, a traditional industrial robot, if it's programmed to move something from here to over here, just moves in its coordinates, and if it happens to drop what's in its hand, it continues to move unless that case was specifically put into the program. Baxter knows that it's doing a task. So if something drops, it doesn't continue to operate; it goes back to fix the problem, and that's built into its software stack and its understanding of the world. Very simple rules like that. So it's not the level of common sense that a person would have generally. But if you were training a worker to do a task, you wouldn't say, oh, and by the way, if you drop the object, you don't have to continue moving; you'd expect any human to have that level of common sense. We're putting that simple stuff into our robot. Very simple stuff, very simple stuff, but it hasn't been there. Mr. Nakanishi, your company also makes robots; another robot is standing right there in front, the red robot. You just heard Professor Brooks talk about more human-friendly robots. As a big corporate executive, how do you see these friendly robots changing the manufacturing process? Recently, various autonomous operations have come to require more intelligent capabilities. The traditional robot is very large and very specialized, with its program fixed in advance, as the Professor described. But the next stage of the manufacturing industry requires more flexible manufacturing capability, to adjust to various environments, rather like a human being. Part of those human capabilities is to be installed in the manufacturing line, so that the final target is mass production, but still with customization.
Those are cases of very intelligent manufacturing, and the human type of robot may be a very strong contributor to that. That's the industrial meaning for us. Mr. Nakanishi, your company stresses, or focuses strongly on, solution businesses for infrastructure. What are the technologies that are now making it more possible to create these solution businesses? What is the change in the environment you see that is taking off? One current example is urbanization. It used to be that when we built a huge city, it took about 100 or 200 years. But recent rapid urbanization requires only 15 or 20 years to build a huge city. In that case, total management is really mandatory: setting up the energy, water, and transportation; how many people are located where; the population and the locations to watch. With that kind of more intelligent design for the future city, the schemes for building infrastructure have completely changed: how to set up a whole comfortable environment for the future, without air pollution or traffic jams. That is the kind of approach we are seeking through artificial intelligence capability. But the Internet of Things is a key, isn't it? Yes. So far I have been talking about locations and human life. But now, from the viewpoint of monitoring transportation or electricity, all of the devices and machines are to be connected through the Internet, watched, and used to allocate the most appropriate resources. That kind of approach will be very promising for the future. Okay. Mr. Goldbloom, your company is at the cutting edge of machine learning. Can you give us some concrete examples of problems that cutting-edge machine learning has been able to solve, or provide answers to, which humans have not been able to? Sure.
So I position our company as the first point where things come out of the university lab, before they get into more mainstream use inside companies. And so we're starting to see a lot of the more advanced uses of machine learning that are getting out into the world. Some of the things that we've done that I'm most proud of include grading high school essays using algorithms. There's a little bit of nuance to what's possible with grading high school essays and what's not, but if you have a standardized test with a large corpus of essays, it is possible to train machine learning algorithms to grade essays more reliably than teachers, even. We've done work in the medical field taking EEG readings, and we're able to predict an hour before somebody has a seizure that they're going to have one, with 82% probability. You go from essay grading, where the raw material is text, through to EEG readings, where the raw ingredient is brain signals, and it gives you a sense of the breadth of what machine learning can do today. So what is the technology that is making this possible? Yes, so, and Stuart spoke about this well, in the lab there's been a lot of activity around machine learning. In the 1990s, a set of techniques called neural networks, which tried to mimic the way the brain functions, was the predominant machine learning technique. Then during the 2000s, a class of techniques that I generally classify as ensemble techniques really emerged. These techniques are very powerful because they're very robust: you don't have to be a professor at the University of California, Berkeley, in order to operate them. They're techniques that a less qualified user can use very effectively.
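The kind of ensemble technique Goldbloom is describing can be sketched in a few lines. The toy below is purely illustrative, not Kaggle's code: it bags many weak "decision stumps" trained on bootstrap samples of invented one-dimensional data and combines them by majority vote, which is the basic idea behind robust methods like random forests.

```python
import random

def train_stump(sample):
    # sample: list of (x, label) pairs. Pick the threshold t (among the
    # sampled x values) that maximizes accuracy of "predict 1 when x >= t".
    best = None
    for t, _ in sample:
        correct = sum((x >= t) == bool(y) for x, y in sample)
        if best is None or correct > best[1]:
            best = (t, correct)
    return best[0]

def ensemble_predict(stumps, x):
    # Majority vote over all stumps (ties go to class 1).
    votes = sum(x >= t for t in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

random.seed(0)
data = [(x, 1 if x > 5 else 0) for x in range(11)]  # labels flip above x = 5
# Bagging: each stump sees its own bootstrap resample of the data.
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]
print(ensemble_predict(stumps, 2), ensemble_predict(stumps, 8))  # prints: 0 1
```

The point of the ensemble is exactly the robustness Goldbloom mentions: any single stump may pick a poor threshold from its resample, but the majority vote washes those errors out without any expert tuning.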
And I think that's been a really key development in the democratization of machine learning and in some of the new applications that we're seeing spread throughout industry. Mr. Roth, you're not in the AI business or the machine learning business. From your perspective, how do you perceive the recent developments? Well, my concern is whether we have developed the ethical and regulatory framework to ensure against misuse of these new technologies. Obviously there are huge advantages for our day-to-day lives. But if you look at the various applications, take warfare: we are now facing the possibility of what are known as fully autonomous weapons, that is, weapons that can basically be sent off into the world to kill without human intervention, without somebody actually pressing the button or pulling the trigger. These so-called killer robots: do we want that? Can machines, even with all the advances we've heard about, have the kind of refinement of judgment to decide whether that man in the field with a rifle is a combatant, or a farmer in a dangerous situation trying to protect his field? Or whether that child stumbling toward the front line is somebody who's lost, or a potential suicide bomber who's been sent on a mission? These are very difficult judgments, and you need, I think, human capacities. You don't want machines deciding to kill or not. Or look at even some of the more mundane applications in the home. Yes, to have a robot vacuuming is lovely. But the law right now says that all the data in my phone is available to the government because I shared it with the internet company. Now, that's crazy. I didn't share it so the world could know, but the current law says that because I had no choice but to let the internet company have access to this data, I've lost my privacy rights. Now what happens when that goes into the home?
You know, imagine a robot that's collecting a video image of your home, recording everything that goes on. Are you sharing that with some company, and have you therefore lost all privacy in your home? We don't think that should be the case, but that's the current law. So we need to develop a moral and regulatory framework to guard against real intrusions into our privacy, or much worse, as these applications are applied around the world. So, about the moral, ethical, and legal questions that may arise with the new technology: I would like to come back later to these very important points that Mr. Roth mentioned. But before that, I would like to explore areas where we might see more and more applications of robots in our daily lives, and what effect that may have on our society. Professor Brooks, besides manufacturing, in what other areas do you see us using AI applications? Well, in the past at iRobot we brought out the vacuum-cleaning robot, and that was the first entry of robots into the home. But I think the real driver of the next 30 years is going to be the aging baby boomers. So I look at, say, a 2014 S-Class Mercedes as an elder-care robot. It gives people the ability to drive longer, to drive more safely than they would without it, and to have autonomy and dignity in their lives. I'll be able to drive longer because of these technologies in cars. I agree, I don't necessarily want all my data shared, so there are privacy issues around that. And by the way, I think there was one point made here that I'd contradict. You made the point that a human is going to be a better judge of a situation, and we're already seeing in machine learning that often the machine learning systems are the better judges. We've seen it for over 20 years in automatic braking systems. The automatic braking system is a better judge of how to brake than a human can be.
So I think it's not necessarily true that a human will always have better judgment, even in difficult classification situations. I think that's a bias that may fade away over time. So you're saying an aging society will make more use of robots, in healthcare? Right now it's cars; next, robots coming into the home, giving the very elderly the ability to live by themselves or with their spouse, independently, without having outsiders coming into their home or having to go into high-level care, letting them live at home longer. Because with the demographic inversion, there are just not enough people to provide the sorts of services that the elderly will need. And this will give them, and I'm not talking about companion robots, I'm talking about robots that might help you get onto and off the toilet, or into and out of bed, so you choose when you do those things, rather than a caregiver coming into your house and telling you when to go to the bathroom. Mr. Roth, you wanted to say something. Yes, on this issue of killer robots: we often hear this argument, so let me answer it precisely. It's one thing for a car to decide when to brake and whether that obstacle is going to be in your way; those are the kinds of physical judgments that undoubtedly computers can be programmed to make better than people. But when you're at war, the judgments are not simply issues of physical estimation; there are essential moral judgments at stake. And I don't have faith that it's just a matter of better programming before machines can make those moral judgments properly. Because a key impediment in war is human empathy, the natural reluctance that people have to kill another human being. That's going to be very difficult to program. Or take even the more pragmatic issues: one of the big legal requirements in war is that if you're attacking something, the military advantage has to be greater than the potential civilian cost.
Look at the two sides of that equation. Military advantage is completely contextual: bombing a bridge today may be very important, but bombing it tomorrow is irrelevant, because there are other ways across the water or people have already crossed the river. And civilian casualties: how do you first of all decide who is a civilian? How do you decide whether your target, because you've seen a pattern, might tomorrow be someplace else where there wasn't a school bus at the moment of attack? Who's to decide whether this ring of people around the target is or is not sufficiently valuable to refrain from killing that person at that moment because of the collateral damage? These are very complex, context-specific judgments. Mr. Roth, can I come back to that issue later on? I just want to explore the impact of these technologies going into our daily lives first. Professor Russell, automakers and Google are testing their autonomous vehicles on the roads now, and the impact that may have on our daily lives is perhaps much bigger than we imagine it to be. What do you think can happen if autonomous vehicles really start running in our societies? So I think this is a very good question. There's generally a tendency to just imagine replacing some current function with an automated version of the same function. And I think this is the fallacy in the discussion about autonomous weapons: we're assuming that a machine is simply going to replace a human in some decision about whether to kill or not kill. And I know you don't want to talk about autonomous weapons right now, but just to finish that particular sentence: in fact, there will be an arms race, and the nature of war will change completely.
And, crudely speaking, the life expectancy of a human on the battlefield will be 10 seconds. Is this a direction we want to go? I'm not so sure. So, back to your real question. Autonomous vehicles will certainly make life easier for people with disabilities who have trouble driving and, as Rod said, for elderly people. There are still some difficulties. The evidence shows that the cars are incredibly capable at navigating the physical environment, at detecting other vehicles, pedestrians, street signs, and so on. So under normal circumstances they can drive extremely well, and that's been demonstrated over several hundred thousand miles of driving, on both freeways and urban streets. But what's not clear yet is the rate of occurrence of situations where a human has to apply common sense in order to decide whether to proceed. You can imagine, for example, coming up to an intersection where there's been an accident, and you have to decide: should I turn around, or should I try to squeeze my way through some path on the road, when there are emergency vehicles coming and perhaps people milling around, and so on? Many mornings during my last semester of teaching, I would come into class and describe to my students some common-sense situation where I think a Google car might have had some difficulty. For example, deciding: is this a moving truck or is it a delivery truck? A delivery truck from UPS will be there for 90 seconds; a moving truck could be there for two hours, so you might want to turn around and take a different route if the road is blocked. These kinds of situations require difficult common-sense judgments, and I think this may be one of the reasons why the date of rollout for the consumer market has been moving a little bit into the future. But another thing you have told me is that there may be no more need for parking spaces, and people may not be going shopping. Why would anyone need parking in a city?
There's no need to park your car, because your car can just go and put itself back in the garage and come and meet you when you need it. Or, just like at the airport, when you want to pick someone up, you have to drive around the airport until they show up at the curb; your car can do the same thing. So the need for parking would be greatly reduced. That changes the equation for public transport, because instead of having to drive to public transport stations, to train stations, and park your car, your autonomous car will drive you there and then go back home, so you don't need huge parking lots. So, counterintuitively, the availability of autonomous vehicles may increase the use of public transit and change the structure of our cities to some extent. You can also imagine that you would send your car to the supermarket to pick up your groceries, so it makes the entire journey with nobody in it. If I were a taxi driver, I'd be a little bit worried, because there's no particular reason to have a taxi driver. Elon Musk, who is developing an autonomous vehicle at the Tesla corporation, described riding in such a vehicle as just like taking an elevator. You get in, you push the button saying where you want to go. You don't think about the fact that there's a complicated algorithm that controls the motors of the elevator and makes it stop exactly at the right place. That didn't used to be the case: there used to be a person driving the elevator, and often it was jerky and they would stop at the wrong place and have to go back down. And he says, well, once you've had the new experience, why would you go back to the old way of elevators, where you had to have someone driving them? Can I just add something? It gets to Kenneth's point and Stuart's point, and I'm not going to talk about military robots, but about self-driving cars. I think one thing that is important for all these technologies is that they keep the individual empowered.
And so one of the things on which I think too little research has been done in universities for self-driving cars is how a person who's not in the car interacts with that car. If you're walking on a little country road at night and it's pitch black and you hear a car coming, the only thing you ever do is get off the road, because you'll never know whether the driver of the car saw you or not. Right now, in daylight, we make estimations of whether the driver has seen us, whether we're in another car or a pedestrian: if they've got their phone up to their ear and they're looking that way and you're over here, you estimate they haven't seen you. The trouble with self-driving cars at the moment is that they give no social cues, so they become kings of the road, and the people around them have to deal with them. I think we might see some acceptance problems unless they become more integrated with normal human interactions. Mr. Nakanishi, would you like to make a comment? From the viewpoint of automatic driving systems, there might be several stages. For totally hands-off automatic driving, we need many kinds of infrastructure before it can be established. But current technologies are supporting various driving styles: with the detecting systems you mentioned, sound and so on, the driving style can be changed to whatever is most appropriate. That's the real practical case for autonomous driving right now. Mr. Goldbloom, you've been cooperating with corporations: they bring the big data, they bring you the problem, and you have 250,000 data scientists compete for the best answers, right? Having worked with many Fortune 500 corporations, what do you see as the hurdles corporations face in utilizing this technology? Probably the thing that we spend the majority of our time doing is helping our customers get an intuition for what's possible with machine learning and what isn't.
I think that the so-called Google generation, the generation that's grown up with Google, has a much better intuitive sense of where they might be able to use a data-driven or machine-learning-powered solution, and where human intuition is still likely to triumph. And certainly, to the extent that we deal more with the senior executive level, we find that that intuition is not quite so finely tuned. So that's one big stumbling block that we face. Another one is a little more mundane: a lot of companies grew up before the era of large-scale databases, and so they've got a little bit of data stored in this database over here, a little bit over here, and something in Excel over there. Just literally assembling the data sets and getting them to a point where you can actually do machine learning is unfortunately a real bottleneck. It's an issue that older, more traditional companies face and that digital-native companies like Google and Amazon don't. So you're saying that their data management is archaic? Yeah, archaic is probably unfair; I have a lot of empathy with their situation. Amazon grew up in a world where all these technologies were available, so from the very beginning they could have their data well organized in a nice structured database. Companies like Walmart, and actually Walmart has done a very nice job of this, but other, more traditional companies, have had to make the transition, and IT change is very wrenching, as I'm sure a lot of people in the room will have experienced. So going from an older system to a newer system is not an easy process in a larger company. Mr. Nakanishi, you are in the solution business, using massive data as well. What do you find in your corporation to be some of the hurdles that are making it difficult? The target is really to establish the most appropriate, optimized solutions.
Big data and big data analytics: those two tools are really powerful tools for arriving at optimized solutions. But before that there is a big issue for us in utilizing big data: how can we gather it, through collaboration with our potential customers or with societies? If we can, we can analyze it, detecting the real key issues, to avoid malfunctions and to find more appropriate solutions. That's our approach. So there might be many cultural issues to be solved before applying the algorithms. Between the case of mining and the case of healthcare, for instance, there is a big difference: mining really involves few human factors, but in healthcare, how do you deal with all the data for each person? Sometimes there is privacy, and the question of what protects such a right. Mr. Roth, you wanted to follow up on that. I mean, at a certain level, you think: if you could use big data analysis to predict who in this room is going to have a heart attack in the next year, wouldn't that be great? Because we could take preventive measures and save lives, and that's undoubtedly for the good. But again, there's a need for regulation, because with that same data in the hands of an insurance company, they would say, well, we're not going to insure you, or we're going to charge you an arm and a leg, because you're going to be expensive. And that completely undermines the point of insurance. So this is not an argument against big data analysis; it's an argument for certain kinds of regulation in this new context. And to move to a different context: the latest people to be enamored of big data analysis are our security forces, the intelligence agencies. They keep arguing that we need a bigger and bigger haystack so we can find the needle that might be a terrorist.
And the intelligence agencies have been so intent on collecting this massive data that they have actually overwhelmed themselves. If you look at the Paris attacks just recently, these were people who were known to the police, but the police ran out of resources to follow them; they had other priorities, because they are so overwhelmed by this data. And so you have to ask: are we getting the right balance here between this wonderful big data analysis and some of the old-fashioned techniques needed to follow up? I think right now we're actually swinging much too far toward just collecting everything, beyond our capacity to use it, with real costs to our privacy. Professor Russell, since the privacy issue has come up, I want to ask you: what should corporations do? Data gives much value; it provides business opportunities; you can create businesses with big data. But isn't aggregate data just as useful, so that you don't have to get so personal in order to find the business opportunities? Isn't aggregate data enough to create business opportunities? Do you think corporations are doing enough to make the privacy issue safer, to make us feel more comfortable? My view is that they're doing almost nothing, and there are two reasons for that. One is that there haven't yet been very many disasters that have really affected the corporate bottom line, although there have been a few. The other is that I think people are just lazy. Companies use very simplistic solutions that don't protect privacy, even though technological solutions exist that do. So, individual data is very useful.
For example, if I'm walking by a particular store, and that store has information that I'm the kind of person who buys a particular type of product at that store, let's say a scarf, then maybe it wants to text something to my cell phone saying: oh, by the way, we have a really nice scarf in a different color from yours, and it's half price today, and maybe you'd like it, because we know you lost your scarf last week. Now, you'd be a bit taken aback if you got that message, but I guess you might be grateful. But the point is that this can be done without the company ever explicitly knowing that you were at that particular place, and without having to record that information. So individual transactions, individual advertising, can be done in a way that's anonymized, using encryption techniques. These solutions are available, but they're just not used, because companies take the more straightforward approach, which is a plain database lookup of unencrypted information. Why do you think they're so lazy? Perhaps they are not aware of the technological solutions. Perhaps they just don't feel any pressure. It may cost a little bit more. It might be that the solutions haven't been made available in a packaged form that's easy for corporations to use. So I think there will be a learning process and a commoditization process for these privacy-preserving technologies, and perhaps regulation might push that along a little bit. Professor Brooks, have you ever felt this was kind of scary or spooky? Probably many people in the audience have. Often when I'm going somewhere, ahead of time, on my home computer, I'll look up where it is on Google Maps. Then I get off the plane, I'm in the rental car, I bring up the Google app here and start to type in where I want to go, and I put in the first digit of the address and it autocompletes exactly where I want to go, because it has correlated my actions at home on my desktop machine with me traveling around.
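The anonymized matching Russell described a moment ago can be sketched very crudely. Everything below is invented for illustration: the store keeps only keyed hashes of opted-in customer IDs, so a match can be made without the raw identity ever being stored or transmitted. (Real deployments would use genuinely cryptographic protocols such as private set intersection; a shared keyed hash is just the simplest stand-in for the idea.)

```python
import hashlib
import hmac

# Hypothetical per-campaign key, shared with the customer's device.
STORE_KEY = b"store-campaign-key"

def token(customer_id: str) -> str:
    # Keyed hash: the store can match tokens, but cannot read a
    # customer ID back out of one without a brute-force search.
    return hmac.new(STORE_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

# Store side: tokens of customers who opted in to scarf offers.
opted_in = {token("alice"), token("carol")}

def should_send_offer(customer_id: str) -> bool:
    # Device side: compute the token locally and check membership;
    # the raw ID never has to appear in the store's database.
    return token(customer_id) in opted_in

print(should_send_offer("alice"), should_send_offer("bob"))  # prints: True False
```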
Now, at first I found that very spooky and scary, but I must admit that I found it convenient. So I got sucked into this loss of privacy. So I'm confused by these issues, to be honest. Mr. Goldbloom, how do you feel about it? I've got the strong suspicion that I'm the outlier on the panel. My sense is that people of the Google generation, I used that term earlier, are less sensitive to privacy than people of the generations above. My grandmother, for instance, as a Holocaust survivor, spent five years trying to hide her identity while she survived the Holocaust, and is horrified by the idea that I freely put things on Facebook and other social media. I feel like I trade my privacy by using things like Gmail in order to get the convenience of the Gmail service. And I, for one, would be delighted if Google were autocompleting my directions, or if they were telling me that I can get a scarf at 50% off. So in general, privacy is not something that worries me. I also think that there is a social good in transparency: people often behave better when they have the sense that their actions are being watched. There's the quote by Louis Brandeis that sunlight is the best disinfectant. I do think there are some situations where we do want to be careful and where we do need to regulate, and I agree that health insurance is one example. So in general, I think the default position should be to let it all be out there and regulate the handful of situations that are problematic. If I could address the social media situation, because this is the World Economic Forum, I think we should look at this not just from the perspective of nice, safe, rights-respecting Western countries, but also the rest of the world. And we're not really choosing to use Google or not. I mean, there is some version of Google in every country, but really relatively little choice. If you want to be in the modern world, you've got to use this technology.
And if in doing so, it makes it easier for your government to monitor every aspect of your life, to figure out who the dissidents are, to arrest them, to censor them, there are problems here, and so it's not all wonderful and good. And I think what's needed, we don't want to get rid of the technology, but we need to be much more attentive to some of the rights problems that arise, and try to protect those rights at the same time as the technology evolves. Mr. Nakanishi, corporations have been accused of being lazy. What do you say to that? Well, from our business point of view, there are so many social issues to be solved through big data analysis, and the question is how to arrive at more appropriate solutions. At the same time, it's a gold mine. You want to know where people take the train, where they buy a drink, and when they exit. There are so many business opportunities. Yes, that observation of people's flow can be utilized to make a clearer public transportation system for each city. That means saving energy, making life convenient, and making life more comfortable. Those are vast opportunities for us to pursue through digital technology. That's our basic stance. And you have to be aware of the negative consequences as well. Professor Russell, what direction of research can help maximize the benefits of artificial intelligence? Where should the priority of research be heading if we want to maximize the benefits to society? So I think up to now, most fields, AI, economics, statistics, operations research, have all had the concept of making optimal decisions. And they pretty much have the same definition. But where the utility function comes from, the objective that's being optimized, has always been treated as an exogenous variable. That means that someone else is going to tell me what the objective function is supposed to be.
And the science of statistics and operations research is about optimizing once that's given to you. But if you think about a human being, we expect human beings to make good decisions, but we also expect them to understand what the objective function is. So if I ask a human being, can we have some more paper clips? The human being knows that that means a few paper clips, maybe a box of 50. It doesn't mean covering the entire planet six feet deep in paper clips, because the human being has the rest of the objective function that's shared among human beings about what matters and what's good and bad. And at the moment, that topic is very much understudied in artificial intelligence. So my feeling is that this will be a very important area. And I've been using the term value alignment, which means: how do we get machines to have their value systems aligned with those of human beings? Because as their ability to make higher and higher quality decisions increases, and their ability to look further ahead in time and space gives them a more dramatic effect on the world, if their values aren't aligned with those of human beings, then you have bigger and bigger problems. So if we can solve that technical problem of getting machines to understand what human values are, the entire spectrum, everything that we care about and all the trade-offs and so on, that humans just understand naturally, then I think that will go a long way toward making AI more beneficial. And in a very practical sense, if you think about a domestic robot that might be doing some of the tasks that a person might do, for example the laundry, the cooking, the cleaning, perhaps babysitting the kids until the parents get home, there are a lot of common-sense decisions that have to be made that involve trade-offs, hundreds of these decisions every day. For example, there's some food in the fridge.
The children are hungry, but the food's been in the fridge maybe a little bit too long. Do we give the food to the kids, or do we go and spend more money at the store and buy some more food and come back and do it? The robot has to know that the cat is important to the family as a pet, and is not a food item to be used if there is no more food in the fridge. So imagine people start selling domestic robots that can act. I mean, companion robots are one thing, but these are robots taking real actions in the home. The newspapers would be all over the first instance of one doing something really stupid and really unpleasant for the family. And people would stop buying that robot, because if it makes that kind of mistake, putting the cat in the oven, who knows what else it could do? You lose faith that the robot has common sense. As someone who's put 12 million robots in people's homes, I agree it's a nice academic exercise. But I think we're so far from having the capabilities that a robot could manipulate the fridge even, let alone the cat. These sorts of examples sound fun, but they're so far from reality that I don't think we need to worry about them yet. And when the time comes that we do need to worry about them, I think the issues will be way different than we imagine they are now. They'll be complicated, but I don't think we have the tools to understand what they'll be like, because the world just doesn't change by plopping a new technology into it. It's like a Hollywood movie. A Hollywood movie always plops a new technology or a new thing into existing society, but existing society has always changed before you get to that point. So although it's great for writing papers, Stuart, I don't think that these sorts of issues are going to confront us anytime soon. I disagree with you. I'm not saying we shouldn't have regulations. We absolutely should have regulations.
We always have and we always will have regulations. But I think it's too easy to go off into esoteric things that push into an area that no longer makes sense. Sorry. Actually, I'm not sorry. I think you're wrong. So Rod and I have a long history of disagreeing, more than 20 years old. His most popular textbook in AI says that my techniques have not had any practical value. That's what it says. No, no, no. You want to keep going? If I said that, Rod, I apologize. It was youthful exuberance. So I do think, actually, that these capabilities are maybe a little closer than one might think. In our lab, for example, my colleague Pieter Abbeel has shown a sort of complete laundry cycle: a robot that's able to pick up a big bin of laundry, go through each of the items, sort them according to the kind of wash they need, put them in the washing machine, run the washing machine, take them out of the washing machine, sort them again, and fold them up into towels, socks, shirts. And the one thing it hasn't figured out is what happens to that missing sock. Where does it go? In computer science we have this notion of undecidable problems, and this seems to be one of them. And in other areas, there are robots that can find a recipe on the web, read the recipe, and actually cook the dish. The difference is that these are not practical systems by a long shot, and they will not be for a long, long time. So there are tremendous possibilities. Can I say something about that? There are tremendous business opportunities, many, many things that could be explored in the field. But five to ten years ahead, where is the technology going to be? I've heard that there are computers that can recognize photographs and classify them pretty correctly, probably at 90 percent, correct? Over 90 percent. And people did not imagine robots could do that five years ago.
So, in the next five or ten years, what technologies do you think will be surprising us? Anybody want to take a guess? I think we will continue to see the impact of deep learning over the next five years in perceptual tasks, as it has already improved speech understanding. It will continue to improve image understanding. But I think that there are some other problems which we would think would come with that, such as manual dexterity, which I don't expect to get particularly better in the next five years. So we're going to get narrow improvements. It's a little hard to predict. As you said, five years ago I don't think anyone in AI would have believed how good the image classification systems are today. So it's very hard to predict which ones, unless maybe you're the Google generation and then you have intuition. I think it's very hard to predict exactly which ones they're going to be. There will be a whole bunch of them, and there will be a whole bunch where there is no improvement in the next five or ten years. And maybe it's going to be 20 or 30 or 40 years, as the backpropagation algorithm really took 30 years to mature. Anybody want to give a prediction five or ten years ahead? No. One thing is that this is not simply a technology issue. Currently, the various global problems cannot be solved with a single solution. Many of the items are networked in complicated ways: climate change, energy management, pollution, water. Many of the subjects really have to be integrated into one solution. Whether artificial intelligence is the appropriate term or not, I'm not sure, but the digital society can solve things that could not be solved in other ways in the past. So that's the future view: how to integrate the various analytics, and how to arrive at clearer solutions through digital technology. That's our view of the future.
So I think that we have had an enormous number of developments in the last ten years, and I think we have a lot of pent-up capability that hasn't made its way into society yet. I gave one example earlier of the ability to grade high school essays using algorithms. That is a long, long, long way from being widely deployed inside schools. And so I actually think the biggest change we're going to see in the next five to ten years is some of the capability that has been developed in the labs, and on the cutting edge in more cutting-edge companies, really starting to get much wider adoption. Mr. Roth? If I could talk maybe about where the regulatory efforts are heading, because for some of these technologies you're much better off keeping the genie in the bottle. Once the genie is out, it's very hard to push it back in. Think about nuclear weapons. Wouldn't we have been better off if that technology had not been allowed to be deployed, and we weren't stuck trying to regulate it after the fact, where the world is now full of nuclear weapons? There was a positive example with so-called blinding lasers. The technologists came in and said, isn't this great? We can incapacitate a soldier on the battlefield without killing him. And people were actually appalled by the idea that these lasers could just go around and blind people from afar. And so there was actually a treaty quickly adopted that prevented that technology from even being developed. We're now trying to do something similar with killer robots, with fully autonomous weapons. And actually, on the other side of Switzerland, there is a meeting under something called the Convention on Conventional Weapons, which is trying to develop a treaty that would ban the development or deployment of fully autonomous weapons.
There's a similar effort to define what the right to privacy means in this increasingly digitized world, where governments view us as not having any real privacy because we share the information with some company. So there are regulatory efforts underway, but they need to be actively promoted. There's big pushback from industry and from governments. I know Professor Brooks, you wanted to, oh okay, we have five more minutes and I want to touch on the employment situation as well. I just wanted to say we have to be careful. I agree that blinding lasers are bad and that was a good outcome. But we shouldn't therefore say we shouldn't have high-powered lasers, because there are plenty of other applications for high-powered lasers. And sometimes I think, in the popular press at least, it gets confused between the real issues and what should be banned and what shouldn't be banned. I agree. I mean, much of this technology is good. It's the weaponization of it that becomes a problem. I think one of the biggest concerns that people have is that artificial intelligence may bring massive unemployment. What do you think about the effects on employment? Mr. Goldbloom, do you have any sense of what may happen? So in almost every circumstance when it comes to technology, I find myself on the optimistic side of the fence, except in this one. There has been a remarkable amount of development in machine learning capability in the last 10 to 15 years. And I look at some of the things that we're seeing on the cutting edge of what machine learning is able to achieve, and I see the potential for really large numbers of professions to start to become obsolete. And I think that historically, society has dealt very well with that through periods like the Industrial Revolution.
My concern this time round is that the rate of change is so fast that the fact that we've adapted to progress in the past doesn't necessarily mean we should be optimistic that we're not going to have massive unemployment as a result of some of the remarkable technologies currently being developed. Professor Russell, productivity is going to increase with the incorporation of robots and AI technology, but at the same time people worry about inequality rising. And just yesterday at Davos I heard that maybe 65% or 75% of current kinds of jobs may disappear. Is that really so? So I've heard those figures. Last night I attended a dinner with several senior economists, Nobel Prize winners. The dinner was supposed to be about the new internet economy, but the economists really only wanted to talk about the impact of robotics and AI on jobs in the next few decades. And I think I see two sides of the issue. One is that the notion that normal work means standing in front of a machine and doing the same thing 18,000 times a day, every day of your life, is a fairly new notion. That's really the last 150 years or so. And 500 years ago, someone writing science fiction might have made this their version of a catastrophic, dystopian future. And now we're worried that it's going to go away. So that gives you pause. Do we need to continue with a system where most people in the world are treated as machines? And I think the answer is no. The question is what do we replace it with? How do we engineer a transition to a society where people are actually doing things that people enjoy doing and that are beneficial to others? I read to my children most nights. I enjoy doing that, they enjoy it, but it's not an economic exchange. So people can provide value to each other, and the question is how do we make an economy out of that?
How do people gain wealth and self-respect and a feeling of a position in society when the traditional notion of a job may be vanishing? Lastly, I'd like to ask each of the panelists whether or not the future will be a brave new world. Can I start with Mr. Goldbloom? I can't just say yes, can I? As somebody living in the Bay Area, I said to my wife before we moved to the Bay Area from Australia: in the BC era it was Athens, in the AD era it was Rome, during the Renaissance it was Florence, and today it's San Francisco. And I think a lot of our life in San Francisco is incredibly convenient, whether it be Uber or having groceries delivered or you name it. And to the extent that San Francisco sort of leads where the rest of the world is going, I think we have a very exciting future in front of us. Okay, Professor Brooks? I'm of two minds here. My robot Baxter, up on the top right, is often accused of taking away manufacturing jobs, but in the US the average age of a manufacturing worker is 55. People don't want those jobs in the US, and in third-party logistics the jobs are being done by undocumented aliens, and with the country pushing them out, there's no one to do those tasks. At the same time, I am very worried about the inequality that's happening, and so how do we find ways to make technology something that can benefit people, to have satisfying jobs and income? And lastly, I want to say that with all technologies comes disruption. A hundred years or so ago, farming became mechanized. It didn't lead to massive unemployment amongst humans. The number of people working on farms went way down. What it did lead to, however, was a decimation of the horse population. Let's hope that the humans aren't the horses this time around. Mr. Nakanishi?
I myself am very optimistic about the future, and we cannot stop the trend of digitalization. From the viewpoint of employment, in our case, Hitachi has more than 200,000 employees in Japan alone. There used to be 80% of those employees who were blue-collar workers, but right now only 20% are blue-collar; the others are desk workers. So we have to adjust our own lifestyles to the digital world. That's a very crucial issue, but still, I myself am very optimistic. Mr. Roth? For me, the difference between a world that's much better because of technology and a brave new world is a question of complacency. It's very easy for us to sit back and say, oh, things are getting better, let's just live on. And that can be problematic. We've seen in the realm of surveillance how capacities have far outstripped the ability to regulate them to keep our privacy in check. So we're now quickly trying to catch up. We need to try to avoid a similar problem in the area of weaponry. We don't yet have killer robots deployed, so here the aim is to stay ahead of the technology and develop the laws before the technology gets ahead of us. And lastly, Professor Russell? So the title Brave New World is sarcastic. The book is written from the point of view of someone who's visiting a world that thinks of itself as a brave new world, but he finds it utterly repellent. And it doesn't say how the world got into that state, but presumably it got there by a series of small changes, each of which seemed desirable at the time. And what was lacking, presumably, in that process was foresight: thinking about the direction of society over a long period and saying, if we keep going in that direction, we're going to get to a state that would be hard to get out of, and we won't necessarily want to be there. So I think organizations like the World Economic Forum are really devoted to doing that thinking, to understanding the possible futures and steering.
We have to get out of the mindset that all we do is forecast. We have to steer. And that's what I would like to say. Thank you. Okay. That's it for this session. Thank you very much all for joining us.