Hello, I'm Gerd Leonhard, futurist. My new book is called Technology vs. Humanity. Humanity will change more in the next 20 years than in the previous 300 years. And I say this fully aware of things like the industrial revolution, and of course the internet and spaceflight. But what's happening now is that technology is truly turning science fiction into science fact. We're achieving so many scientific breakthroughs in nanotechnology, in genome editing, in artificial intelligence, and it seems like all of that is happening at the same time now. This is the takeoff point.

So it's quite clear now that technology can do pretty much anything we had imagined. Man and machine are converging; man and machine are creating a kind of symbiosis. And the question for me, ultimately, is where are we going with this? Are we becoming like machines, or are machines becoming like us?

Many of the things we see around us are very gradual. We don't notice much happening, as with the paperless office or cloud computing, and then all of a sudden it goes boom, and it either dies or it explodes. That's the exponential pattern: gradually, then suddenly. So we shouldn't make the mistake of looking at the things we have today, like artificial intelligence or genome editing, as being that far away. We're literally on the way to an exponential explosion, and that will redefine us as humans.

It's quite clear that we're approaching what's called the singularity, the point where one computer has the capacity of a human brain. And roughly by 2050, many experts and other speakers are saying, one computer will have the capacity of all human brains combined. That may sound very scary to us, and it probably could be. But the real question is how we control technology at that point, when a computer has an IQ of 50,000. One key driver of this development is artificial intelligence. This is the major thing in technology: machines that can think.
Some people call that cognitive computing or even conversational computing, along with all the related terms we're seeing, like deep learning and neural networks. It's a huge topic these days, of course, and a major driver of change. We may find ourselves in a situation where we can no longer tell the difference between an intelligent machine and an intelligent human the way we can today.

We may indeed be the last generation of humans that lives without being altered, without being augmented. Augmentation, of course, already includes using a mobile phone, but that is a trivial thing compared to wearing visors or augmented-reality glasses, or having implants in your eyes, or cochlear implants for data, a Wikipedia implant. All of this is becoming possible by connecting our thinking to the Internet; as some people are saying, uploading our brains to the Internet or to computers is our future. The question really is: why? What is the purpose? Is the purpose human flourishing, or is the purpose technology in itself? Does technology become the purpose of our lives?

Automation is raging everywhere, probably eating 50 to 70% of our jobs, even though there may be another 60 or 70% of new jobs that we don't even know about yet that are going to unfold. But automation is a big deal. What should we automate, and what should we not automate? Responsible automation will deploy technology for humans, not in spite of humans. It's very important to realize that we cannot automate being human. We cannot automate happiness. We cannot automate satisfaction or contentment. Those are things we need to create as humans, and there's no wormhole to being happy. And we cannot, and we should not, employ technology to create fake happiness.

In the book I talk about the need for a kind of protection agency for humans, and I sometimes joke about this being the environmental protection agency, the EPA, for humanity.
A place where we can say, you know, we can do this, but maybe we shouldn't, because we need to protect this; it's a nature reserve, so to speak. But we also have to have some precaution. We shouldn't invent or do things that we cannot undo, things that carry existential risk. We need to think about the non-proliferation of artificial intelligence, or its proliferation in a controlled way. And the same goes for genome editing and for nanotechnology.

Who is mission control for humanity? This is a very important question. We can't allow technology, and by extension of course the military and investors and stock markets, to control what we do with our humanity, what our ethics should be, what we are allowed to do in the future. That is the key question. Are you on Team Robot, or are you on Team Human? We should embrace technology, but not become it.