Dear Prime Minister, dear Mayor of Moscow and everybody else, ladies and gentlemen, it's a great pleasure to be here. I don't know if you know what a futurist does, but I don't do what a usual futurist does, which is to predict things. I observe things, much as you could do if you had the time, and I try to work backwards into a new reality of what we can do in the near future. I think most of us now have to realize one thing: we're living in a world that's based on exponential technology. It's very hard for humans to understand the exponential, because we are not exponential. We are linear. We don't live faster because we tweet. We don't have more real friends because we have Facebook. We are not technology. But technology is now exploding, in the sense that all of a sudden Moore's law is helping us to understand that the next five years will give us things that look like science fiction, or have given us such things already. When you double exponentially seven times, you get to 128, not to seven. And today we are at four; the next step is eight. So basically what's happening here is a convergence of man and machine, humanity and technology. In many ways it's extremely scary; in other ways it's extremely positive. But humanity will change more in the next 20 years than in the previous 300. And that's easy to say when you're looking at things like book printing or the steam engine, right? They changed a lot. But what we have here today, the things happening in the next 20 years, will fundamentally redefine what it means to be human. It was already redefined by mobile devices, by the internet. Now we're talking about things that will shape the future in ways we thought impossible until today, where everything becomes a yes answer. Take things like automatic language translation. Skype already has this integrated into its software.
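The doubling arithmetic in that passage can be illustrated in a few lines of Python (a minimal sketch for the reader, not part of the talk):

```python
# Linear counting versus exponential growth by repeated doubling.
# Seven linear steps reach 7; seven doublings reach 128 - the gap
# between linear intuition and exponential technology described above.
linear = list(range(1, 8))                   # 1, 2, 3, ..., 7
exponential = [2 ** n for n in range(1, 8)]  # 2, 4, 8, ..., 128

print(linear[-1])       # 7
print(exponential[-1])  # 128
```

After only seven steps the exponential sequence is already more than eighteen times as large as the linear one, which is the point of the "we are at four, the next step is eight" remark.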
Take things like genetic engineering, or going into the bloodstream to fix my cholesterol. Take brain-computer interfaces, virtual reality, augmented reality, bionic arms and limbs, self-driving cars, and of course the biggest one, genetic engineering. Talk about cancer. If you have cancer, or your friend has cancer, of course it would be fantastic if we could fix this, right? I mean, think about that reality. But the same technology can be used to design our babies, right? So if you're very rich, you can then go out and have a different kind of baby, because you can afford it. This kind of technology will be ready in the next 10 to 15 years. And it's safe to say that technology is progressing at mind-boggling warp speed. But we, and our laws and our societies, have remained linear. So it's interesting to see that, as many people have said, the power of technology has surpassed the scope of our ethics. Sometimes we don't even know if it's our mobile device that's telling us to do something or if it's us. We don't remember the phone numbers of our spouses because they're stored in here, right? We go on a date based on what the device says. I mean, as these powerful intermediaries have emerged, the question really is: what are their ethics? I'm not talking about religion here. I'm talking about values, beliefs. As the Dalai Lama said a few weeks ago, everybody has ethics and some people have religion, right? There's a big difference. We all have beliefs about what the future should be and what we would like it to be. And now we have artificial intelligence coming along and claiming that it can basically simulate and emulate our humanness through cognitive computing. The biggest initiative of IBM is cognitive computing and the neuromorphic chip, the chip that acts like a brain. And then we have intelligence amplification.
For example, Siri, Cortana, and our mobile phones, allowing us to get smarter by using technology; that will be a very big growth area in the next five years. And we've seen that all around us. Artificial intelligence rings alarm bells: think about the self-driving car. If you can use the self-driving car to sit in a traffic jam while the car takes over, so you can read your emails or eat a sandwich, that's a good thing. But if the car takes over when you're going onto the freeway at 150 miles an hour and decides who's going to live or die, that's an entirely different thing. Do we want machines to decide what our future holds? William Gibson, the science fiction author, once said that technologies are morally neutral until we apply them. Everybody in this room who is creating technology is also impacting ethics, society, organization, politics and, of course, jobs and employment. We cannot think of technology as being morally neutral when we roll it out. We have to take it one step further. Look at all the things that are happening around us: Google Now, Facebook's Moneypenny, Siri and Cortana, basically intelligent digital assistants. If you're in the search business, for example, you know that in three to four years we will not search the internet anymore as we do today, by typing in "best sushi in Moscow," right? Our intelligent agent will already know where we are going to go and what kind of budget we have, and it will have already made the reservation. We are actually going inside of technology. Those devices are us, a digital copy of us. So the next step here, basically, thinking about what that means, is devices like Jibo, and there are 20 other ones. This device was invented two years ago. The founder of Jibo says that this is not a robot, it's a friend. I mean, we're already at the point where there are over a dozen devices that act like our friends as machines. In Japan we have robot pets, of course; that's already pretty common, right?
But imagine a device that knows who you are, right? Think about what that would mean: social robots, machines as partners. You don't have to have too much imagination to think about what the next step would be if you're an entrepreneur. But with social robots, the question basically is: should technology go inside of us? I mean, I have my mobile phone here; I may have it on my glasses. Maybe I have a visor, a hologram, an Oculus Rift, yeah? But technology going inside of me like a plug-in, right? Like people use if they have impaired hearing. Should we become technology? And when I ask people around the world this, many times I hear people saying, well, if I can become a superhuman based on technology, I definitely want that. Mostly 15-year-olds say that, right? But think about that for a second. How far would you take it, from the mainframe to the mobile to an implant? If you're a futurist or a professor or anybody else, would you like a Wikipedia implant? You could give speeches faster; it's not impossible. Think about where that's going to take us in the near future. Should it take us to that point? Looking at ethics (this is a short definition of ethics), we clearly have a problem here, because technology does not have ethics. What does a piece of software, an intelligent assistant, a robot, know about ethics? It's impossible, and we don't want them to know anything about ethics, right? But a self-driving car has to make, in a split second, the kind of ethical decision that we make when there's an accident. Think about what that means with things like the singularity that Ray Kurzweil talks about, the convergence of man and machine. By 2025 we will have the first computer that is as powerful as the human brain in capacity. By 2050, a single computer will have the capacity of all the brains in the world. All of the brains. We're talking capacity, computing capacity, right? Not emotional capacity, hopefully.
But basically, that is a huge shift that we have to think about. So I'm saying that maybe some things should not be automated. Here's an example: Mattel, the toy maker, launched a new Barbie doll three weeks ago. This Barbie doll is connected to the Internet. It will actually listen to what the child is saying and will give a response in real time from the cloud. It will not be pre-recorded. Check it out, a short video. Do we have sound? We don't have sound. Well, I'll explain to you what it is, but I guess we didn't set that up beforehand. But basically, with this doll (you can watch it on YouTube if you just put in "Hello Barbie," okay?), you can find out that it connects to the web and talks to your child as if it were a person. Now, imagine what your child will learn by talking to this doll, right? The doll says: I love you, you're my friend, it's a great day, and so on, right? It will give predictable answers. Your child is going to learn how simple people interact, and will be very disappointed when it comes to real life, where not everybody is a friend. Some things should not be connected. There's a trend in technology to reduce everything down to an algorithm. Everything can be made into an app. There are 27 apps to get divorced on the internet, right? Maybe it's too late for you, it's too late for me, but this kind of reductionism and machine thinking is a real problem, right? Because we're ignoring the context of human reality. So we have to think about this. There is not an app for everything. There's not going to be an app that regulates our data in the way that we ourselves would. Now, that is not to say that we're not going to use technology to make things more efficient. We are going to do that; as Jeremy said earlier, there's a potential for us to really solve climate change, energy, and food using technology. But technology will not solve social or political problems, right?
I mean, it would be wrong if we looked to Facebook to be our government, right? As some people have suggested. What's happening here in technology is quite clear. We're moving from using mobile devices on the internet to using robots and intelligent digital assistants to automate processes. TechCrunch (the host moderator works with TechCrunch) published a very important article about two weeks ago about how digital assistants are taking over jobs. Very soon, your executive assistant will be in the cloud. It already is, if you use Siri or Cortana. You may have heard about transhumanism, the idea that people become technology in order to be a better fit for technology. I believe that this would be a very bad idea, because it would reduce us to machines. We should use machines to be more human, not make ourselves more like machines. This used to be the key question in technology when I first started doing internet work in the 90s, right? The question always was: how can we do this? Can it be done? You know what the question is today? Every time you ask that question, the answer is: it can probably be done, right? Yes, we can; yes, we can. You can do all of those things. The key question now is a different one, right? This is the key question: Why? Why are we doing this? Does it serve the collective benefit? Does it serve customer happiness? Does it serve society? I mean, this is the key question that is going to emerge there. Because, as Sophocles said thousands of years ago in Greece, nothing vast enters the life of mortals without a curse. Technology can be as good as creating huge energy and profit going forward, but it can also create very, very large issues. Keep in mind that we have to actually look at both sides of the fence here. This is why I would encourage you to think about a treaty on artificial intelligence. The most powerful weapon in the future of man is technology: data and information. Let's think about what happens if we don't regulate, or at least look at regulating, this.
What would have happened if we hadn't regulated the oil industry? We need to think about what that means for our immediate future, because everything that we do on this huge grid of technology will depend on our ethics: what we want, and where we want to go with this. There's a great article in the MIT Technology Review. If you look at this graph, it basically says that at some point the self-driving car has to decide: is it going to kill the driver? Is it going to kill one person on the side of the road? Or is it going to kill other people ahead of the car? Because somebody is going to get hurt. That creates a wicked problem. Do we want the car to decide this? Do we want autonomous weapons systems deciding who is a threat and who is not based on an algorithm? I doubt it. I think what we're looking at here is the idea of unintended consequences. Are we actually going to be in charge of this? It's quite clear, when we look at this, that the new arms race is about data and about intelligence. That is a significant opportunity for those providing those things, of course, right? As most of these things come from the military. But we have to think about whether we really want an arms race, or whether we want to gather the benefit of this. That is a very crucial question. Technology in the future is no longer going to be on its own island. This is a 60-year-old paradigm from Paul Baran. Technology in the future is going to be converging with humanity and organization. I would maintain that if you're looking at building innovation, you need to look at this in a holistic kind of way. This bar here on the left shows you how many people will be unemployed because of technology. At the top of the list: loan officers, receptionists, paralegals, retail salespeople, taxi drivers, security guards, fast-food cooks and so on. Automation will take a lot of jobs and create lots of new jobs, but the shifts will be huge. So we need a responsible, holistic and precautionary approach to this.
We cannot just say, let's use technology because it exists. We have done that until now, and it's worked out until now, but now the stakes of what we can do, or should not do, are getting much, much larger. We've heard lots of talk about education and why our kids should embrace STEM, you know, science, technology, engineering and math, and I'm all for that. At the same time, we also need to think about what I call HECI: humanity, ethics, creativity, intuition, the things that only humans can do. The jobs of the future will be primarily in those two sectors, right? The STEM sectors and the humanity sectors, what I call the androrithms. So it is very important to look at how those two will interface in the near future. Riffing off Pablo Picasso (and I'll come to the end of my presentation), who said that computers are for answers and humans are for questions: we should not stop asking questions just because computers can provide the answers. We need both. As a great writer on this topic, Daniel Kahneman, said: cognition, understanding stuff, requires a body, right? You think with your body, not just with your brain. (Can we get them on your back? Thank you, that's good.) So it's actually about having a cognitive approach that uses both technology and humanity. I think if you're looking at the future of innovation, the biggest opportunity is to provide both: technology, imagination and human flourishing. I'll come to the end. I think it's very important to look at this and say, okay, what's happening here? With technology, I would say 90% of it is very positive; 10% has the potential to create huge friction issues. We have to look in both directions. So I would appeal to you for your leadership. It requires foresight, precaution, balance and a sense of holism. That's my final slide. Embrace technology, but don't become it. Thanks very much for listening.