So hola, ¿qué tal? It's really nice to be here. I'm going to speak in English, unfortunately; my Spanish is not so good. I can order drinks and things, but that's a good start. So grab a headset if you're having trouble understanding.

I'm here to talk about humanity and technology. It's interesting: when I started speaking about this five years ago, we started talking about digital ethics, and nobody understood what I was trying to say, because back then digital ethics meant things like: how do you use the Internet? Do you use your mobile phone too much? Gender bias, and so on. But today, funnily enough, we're talking about the future of humanity, the future of us. Because technology is now at the point where we can say that in 10 years it will be unlimited in what it can do. What we're seeing with the smartphone today, and what it makes possible, will be nothing compared to 10 years from now, when augmented and virtual reality are available to us and most of us work in a virtual-reality space; and perhaps 15 years from now, when we can connect our brains directly to the Internet. This is not science fiction. Ten years ago, WhatsApp would have sounded like science fiction: free phone calls, and that was science fiction. It's mind-blowing, the things we're seeing around us.

This is the main topic: humanity and technology are converging, in technical terms. In spiritual terms, that's a different debate. I'm not religious, but it is an interesting question when you think about what humans are and what machines are. What's the difference? Clearly, this phone is already our second brain, our external brain. This is where we keep things. It's not too far-fetched to say that in 10 years this will simply be our brain, and we won't be able to get up in the morning without connecting to it.
That would be a very sad moment, because what's really happening in our world is that it's going to change more in the next 20 years than in the previous 300, and I think 90% of that is good. We can defeat cancer: not in 20 years, maybe in 30. We can have unlimited food, unlimited energy, cheap healthcare, global... I mean, that's amazing, that's like Nirvana. Last year Richard Branson invested in a company that makes artificial meat in the lab. It's actually real meat, but it's grown in the lab, and right now it costs 2,000 euros a pound. But if this works, and the price comes down to 10 cents, and we can distribute it, we can feed everyone.

These are all the trends: virtual reality, big data, artificial intelligence. It's enough to make your head spin. Every day there's a new innovation; every day somebody else launches a new thing that is the next big technology. And here's the bottom line: it could be heaven, or it could be hell. But technology doesn't care which it is. Technology is machines; it's just about efficiency, about making things work. Technology has no moral code. It just makes things work, and if the mission is to make paper clips, it will make paper clips out of us, if it can. When you go walking in the forest and step on 5,000 ants, you kill 5,000 ants and you don't even notice. They're just ants; there are plenty of ants. Maybe in the future technology will step on us by accident.

This is a very big discussion. We have to think about who we want to be. If we want heaven, I think we can have heaven using technology, but we're going to have to talk about what that means. And then we're talking about politics, culture, the social contract, ethics, economics. We're not just talking about money. If we were to use every technology we have to make money, we could probably make 500 trillion dollars connecting our brains to the Internet. But do we really want that?
Do we want to live in a world where longevity is the standard, so we get to live to 120, and some people get to live forever using technology? How far away is that? Fifty years? Imagine if living to 120 is the new normal, if you have money. That strikes me as a prime recipe for terrorism: all the rich people get to live to 120, and the rest of us die at 60 because we don't have the same means. It could be heaven; we could become like gods. Sorry if you are religious. Superhuman. Or it could be hell: a world where everything is connected to a giant algorithm. Your insurance, your healthcare, your media, your content.

Look at what happened to social media. Media has been perverted into an algorithm. Why do you think we've had this huge wave of right-wing thinking in Europe in the last 10 years, this polarization of media? The answer is probably social media. And before all of that, we thought, okay, it's a great tool. I love social media. But we're going to have to go a little further than just making it available.

Clearly, we can't go backwards on technology. If we can invent a way to fight diabetes, Alzheimer's, or cancer using technology, then even if we can only save one person, that's argument enough. We have to do it. But if we invent these technologies, we also have to make sure they are safe, equally distributed, and available to everyone, and that there is an ethical purpose to what they are supposed to be doing. Because the same technology that can fight cancer can build a super soldier, and God knows we don't want super soldiers. We don't want any soldiers, but we certainly don't want super soldiers. As I said, technology has no ethics.

So Tim Cook, the CEO of Apple, spoke in Brussels a couple of days ago about digital ethics and data protection. Considering that he's the CEO of pretty much the richest company in the world, this quote is remarkable.
"We shouldn't sugarcoat the consequences. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them." Now, what companies is he talking about? Not his own, really. It's interesting: Apple doesn't really do that very much. But what we have today in media and communications is, to a large degree, surveillance. In many ways it's also the opposite: it's freedom, right? But look at Turkey. The same technology that Turkish people used to express themselves is now being used to put them in jail. Technology has both angles.

This is obviously a big discussion at Telefónica, where we are right now. And by the way, thanks to Fundación Telefónica and the Foro de la Cultura for inviting me. There's the idea of a new digital bill of rights, a new social contract. SAP has an AI ethics proposal: a software company, talking about AI ethics. This is a huge topic. People are talking about new laws that we need. And the biggest thing is that we should not let technology firms decide what our future is. Look around today. I work for lots of these technology companies as a speaker, so I see it firsthand. Who drives the conversation about the future? It isn't us. It's not the government, it's not the people, it's not the political parties. It's the tech companies. The tech companies tell the story of the future, and they tell it really nicely, the leader being IBM, of course. We should not let the story of our future be told by companies that make hardware and software. This is a human story, not a sales story, not a marketing pitch.

Look at the discussion about AI and ethics exploding. Just a couple of days ago, Tim Berners-Lee and the World Wide Web Foundation published a new Contract for the Web at the Web Summit in Lisbon. This is the hottest topic of the year. Gartner says digital ethics is a number one topic for 2019. It's funny: Gartner is completely a tech-focused organization.
They talk about techy things like virtual reality and cloud computing, and now they're saying ethics is the biggest topic in tech. And I mean this positively; it's not like we're going to spoil the party or something. Ultimately, what is the point of having technology if we can't be human? That would defeat itself. How would you have happy consumers if we're robots? The whole system perverts itself.

So digital ethics really is the difference between whatever is possible in technological terms and what is a good thing to do for humans, what provides happiness. Well, what is happiness? That would take another five hours to discuss. But call it societal flourishing: is this good for us? Maybe the mobile phone is good for us. Would it be good for us to be controlled by an AI? I doubt it. Where do we stop?

We're living in a world of exponential change, and the problem is that technology is exponential. These are, after all, just machines, and we are not. You can say what you want: even if you're 15 years old and can handle these devices better than I can, and that is true, you're not going to become exponentially more powerful. Machines will. We're going to have quantum computing; that's already here. In 10 years we'll have a machine with the combined computing power of all the human brains in the world. That's a machine with an IQ of a trillion. It could be extremely useful if we use it right. So we're at the perfect point of this: our ethics have not kept up with our technology. Einstein talked about this decades ago, but now it's finally true. So we need to make sure that we use technology in a way that actually serves its purpose, which is to help us be better humans, not to help itself make more money. This is otherwise known as the Facebook Dilemma. Facebook was a place that served a purpose for us. Now Facebook has become its own purpose. Facebook is a place where we are the content, generating billions and trillions of dollars from our data.
And in return, it addicts us. Data is the new oil, and artificial intelligence is the new electricity. This is what all the tech companies are pursuing: data, intelligence, cloud computing. We're in the building of a phone company right now, a company that provides connectivity; that's part of this whole empire. We're talking about a 50 trillion dollar gold rush here. Every single company in the world wants to be part of this.

Look at who's actually leading the gold rush, this giant vortex of connectivity. Look at these companies, the top four on this list. They have grown tremendously. They have more money than the GDP of France. They have tripled in value in four years. The richest companies in the world pay the least taxes, and they are the least concerned about the side effects of technology. They make 40 times as much money per employee as a traditional company. The average salary at Facebook is $245,000. Think about that for a second. How much money does the average person make per year in Indonesia? $5,000. Yet every Indonesian is using Facebook. Look at the Internet of Things: $14.4 trillion. AI: $15.7 trillion. So do you really think these companies will go slow on developing things that could have harmful effects? That's like asking the oil companies to go slow on oil because of the CO2. Same problem.

We're going to have to think a little further, because now these machines can actually think. They can hear, they can speak, they can learn. Now, before you get too worried: when we use these human terms, thinking, learning, speaking, that's not what machines do at all. Machines won't think like humans for at least another 50 years. So it's not like that. But it's enough that they have machine thinking. That's already plenty, because now artificial intelligence is becoming active in pretty much every part of society. Look what happened here.
Language translation, intelligent assistants, Alexa and so on. Call centers, debating, poker, Go, speech recognition. Basically, computing is rising all the way to the top. And all of us use artificial intelligence every single day: email, Google Maps, Tinder. Now, you don't use Tinder, I know that; that's just some other people somewhere. But basically we have a cut-off here, right? It's going to be difficult for machines to do things that take human ingenuity, and that's our hope for the future. Will that bar rise? Yes, it will. How far will it rise? We don't know.

But here's the biggest problem. We don't want a machine that has what's called AGI, artificial general intelligence. That's what we have: we are generally intelligent. We don't want a machine to have this. A machine with an IQ of a trillion and general intelligence would be the end of human civilization. Not on purpose, just by accident, like stepping on an ant. We'd be the horses of the digital age. So intelligent assistants, AI, that's fine. But an arms race in smart machines? Very bad idea. And we are already there. Putin already says that's what he wants. Of course he says many things, but he says he wants to make sure that Russia is number one. China wants to make sure China is number one. I don't know about Spain, but we'll see about that.

Basically, this is a huge conversation, because we're heading toward what's called an intelligence explosion: a machine that can do anything. A week ago I was surprised to see a robot that can place a dental implant. A robot can be a dentist. This was an operation in Japan where the machine did the whole thing by itself. I'm not sure I would want to go there, but still. So here's the key question: what happens to us? What happens to the things that make us human? I think we're going to need a moratorium on artificial general intelligence. Not on AI or fancy software or smart machines.
I think that is disruptive, but probably good. But a machine that's like us? I think that's a bad idea, because we're building a new meta-intelligence. We're connecting everything; this is what phone companies do, they connect everything. Soon we're going to have a trillion connected devices. We're building a new nervous system, really: a brain. In fact, Google calls it the brain. There are five or six of these projects; there's even a European Union brain project.

And then we have all these minor questions: security, privacy, rights, sustainability. I call them minor questions because when you have conversations with companies about the future, these are the questions that come up only after you've had a couple of drinks. As Bertolt Brecht liked to say, food first, then morals. We cannot sustain this. I tell you, in ten years we're going to be at a place where this is a reality, and then we're going to say: well, that's really bad, because now I can no longer get insurance, because the system knows that I smoke. You shouldn't smoke anyway, of course.

So, Tim Cook again: technology can do great things, but it does not want to do great things, because it doesn't want anything. If technology is going to do great things, then we have to make it do great things. We have to have the right government, the right politics. Every politician needs a driver's license for the future: a test. I propose that we test every single mayor, every public official, whoever they are, all the way up to the prime minister and the king: do you get it? Are you aware of the future? Can you decide what it takes? Because we're going into a world where the debate is no longer about whether we can do something; that's today's debate. The new debate is about why we're doing something. And this is the question that Telefónica is trying to answer with the social contract debate and digital ethics. It's no longer about technical feasibility.
We're getting to a world that is frankly unsustainable. Social media is built on addiction. Social media companies employ hundreds of neuroscientists trying to figure out how to keep you there, like cigarette companies used to do. They create filter bubbles that make trillions for them but kill every other medium, and they're ultimately a danger to democracy. If social media companies want to help us build democracy, let them hire journalists. Make them pay for 30,000 journalists in Europe to work, paid by Facebook, but staying where they are.

And then we come to the future, where we basically have the same problem we have with green energy. Nobody cared about the externalities of oil and gas or nuclear until we got the situation we have today: 485 ppm, climate change, warming, all the stuff we're going to have to eat in the next 20 years. It is everywhere. Now the same thing is going to happen with technology. The externalities of technology, surveillance, loss of privacy, machine thinking: we have to address them. Otherwise we'll end up in the same place we are today with the climate. In 10 years we'll sit back and say we have, whatever, 1,000 ppm, so to speak, of digital pollution. We need to think about where this is going.

So I have proposed a digital ethics council as a way of putting that into place: an organization of 50, maybe 60 people around the world who get paid very nicely to do the job of thinking about what we actually want. You would be surprised how much we agree on what we want, even between China, Russia, India, the US, and Europe. There's quite a bit of agreement about what we want to be. Primarily, most of us, 99% of us, want to be human. There is a 1% that wants to be more than human; we'll find that out in Burgos tomorrow, right? So I'm going to wrap up by saying that as our lives move into the cloud, this is the key question: how do we put the human back inside?
And as the next speaker will tell you, this involves silence, boredom, mystery, secrets: things we have to protect. We have to create an EPA for humanity, a protection agency for humanity. Because as much as I love technology and the money that comes with it, and the convenience, obviously everybody loves that, what good is it if we can't be human? What's the point? The biggest danger today is not that we're going to be killed by robots; that is a danger in 50 years. The biggest danger is this: we become too much like them. We get lazy, we stop thinking, we leave the work to them.

And that also impacts our education. We have to think about that. We really have to talk about not just STEM education but the combination of the two sides. I call the other side HECI: humanity, ethics, creativity, imagination. Why do we stop teaching kids music, or art, or sports, or philosophy? That is stupid. These are the only things that make us human, and we take them away. We teach kids how to be a machine, and if you teach your kids to be a machine, they will never have a job in their lives, because machines are no longer stupid. It's very important that we put this on an equal footing, 50-50: the science and everything else. Because here's what's happening: as humans, we have the same cards we've had for thousands of years, but tech draws a new card every week. We must invest as much in humanity as we do in technology. That's easier said than done, I realize; we'll talk about what that means shortly.

So I'll summarize by saying this is the inevitable future. Technology is absolutely everywhere: big data, cloud computing, the Internet of Things, artificial intelligence, quantum computing, virtual reality, the blockchain. The list is basically endless. And we're talking about hundreds of trillions of dollars. But on top of all that, we have to think about how we put humanity on top of technology.
Because the purpose of technology is to make us happy, not to make itself happy. And this is what we are. So, very important: the future is awesome humans on top of amazing technology. And for that, we have to collaborate to make it work: think about how we will get there and how we're going to make it happen. Thanks very much for listening.