It's a great pleasure to be with you today here in Pisa. The future is already here; we just haven't paid enough attention. Take this technology, right? It's an amazing tool. Is it going to replace those two people back there? Not anytime soon. You can use this today, and in ten years it will be infinitely better, and we can use it like a tool. We can speak through it. There's now a company developing a headset that I can wear, so I can speak and it comes out on the other side without using the phone. But will this really replace translators? How important are the things that are not being said? A translator would understand the meaning of what I'm trying to say even when I haven't actually said it. The translator knows the objective. Does a machine know the objective? I think it only knows the job. So in 30 years, maybe, it's possible the machine does what the translator does.

I'm here to talk to you about this topic: technology and humanity. I wrote a book three years ago; it's now out in Italian. I wrote the book because everywhere I speak, people say: well, technology is great, but what's going to happen to people? Because technology is everywhere now. And of course technology, as you know, has an impact on our jobs, on our ethics, on our politics. And these days a lot of people are worried about the future. I don't know about you, but in many places I go to in Europe, they say the future is bad: because of climate change, because of technology, because of robots. Robots will take our jobs and then they will kill us. This whole debate led me to write this book, and I'm going to start right from the beginning by saying that I think the future could be heaven, or it could be hell. It could be heaven if we can use technology to solve our major problems: food, water, climate, energy. But technology will not solve our social problems; it will make them worse. Do we have more equality because of technology? Some people would argue probably not. Do we have better democracy because of technology? Some would argue yes.
Many would argue not. To make it heaven, not hell, we don't need more technology; we need more humans. We need more thinkers, more philosophers, more people who are interested in the future. Think of the word renaissance: look at the whole area around here. In the 1500s we talked about the Renaissance. Now I talk about the Rinascimento, a new renaissance of humans who use technology but don't become technology. This is a very important topic in my work.

Generally speaking, I think we have two large issues, and I'll touch on them today because they are really big topics these days. Our first challenge is climate change. You've noticed this debate has exploded in the last three months. We're going to see a lot of activity on this; it's not the topic of my speech, so I'll just give the framework here. Looking at these numbers, the entire southern part of the world could be so hot in 50 years that Europe will be looking at 200 million climate refugees. And Italy, as you know, is the first instance of climate migration. That's just one of the side effects of what we're going to see: the entire lower part of the world would be too hot to grow anything in, and everybody would want to leave. Looking at these stats, you can see that our energy use is up, our GDP is up, our sea level is up, our surface temperature is up. So you can say one thing for certain: we're making a lot of money, but everything else is being ruined. And as we have found in the last couple of weeks, this is no longer going to be an option. It has a lot to do with technology; that's why I'm talking about it. So in the next five years we're going to see mandatory CO2 charges for airplanes. Get ready for that. Prohibition of cruise ships, starting with Venice; that would be a relief to a lot of people, but also a burden. Banning cars in cities. Prohibiting the use of old-fashioned cars. Changing the way that we eat meat: a forced reduction of meat production already happened in Holland two weeks ago.
The government says you have to produce less meat. This is a major driver of economic change. And here's the second thing that's really interesting. The World Economic Forum says our biggest challenge right now is climate change and extreme weather, and the second big challenge is, you guessed it, technology. Those are our two big challenges. But there are also two big opportunities. For example, people are estimating we'll have over 100 million new jobs in renewable energy. 100 million. And how many jobs do we have in technology? There are 20 million people working in social media today, a job that didn't exist 10 years ago. So as far as jobs are concerned, of course I share your concern, but here is a much bigger concern: it's not climate change, it's human change.

Consider the way that we use our devices: this is our second brain. For some of our kids it's the first brain. And this device is going to go here, and here, and then it will go here. Marshall McLuhan said that every time we expand ourselves, every time we make new things, we also amputate. Every time we do something new, we leave something else behind. What do we leave behind when the main part of our life is in the sky, in the cloud? Do we only gain things? Technology is now a religion. It's nice to say that in a church, on a Sunday morning no less. Basically, many of us have more relationships with the screen than we have with people. We laugh about that, but then we also have addiction, right? Social media addiction. None of you would ever think about that, of course, right? But, you know, you wake up at 2 in the morning. And really what's happening now, we notice, is that technology is essentially making a mockery of democracy. Facebook is gunning for our democracy. The story of Facebook is really interesting, because Facebook used to be a fantastic tool for us to reach each other. Now Facebook has become an engine of manipulation. It has become the opposite.
It's like any drug, really: a little bit is sometimes okay, but too much is a problem. And then we have this entire community around the world of technology that believes in data. That's called dataism. It's like Judaism; it's a religion. This is what one of the CEOs of a big tech company says: big business decisions will be made not by experts or intuition, but by big data and predictive analytics. The future politician will be an AI. Whether you get a pension or a job will be up to a machine. Is that the future we want? I mean, I believe in technology, right? I'm a total optimist. I love technology, I love geeky stuff. But should I outsource my spirit to technology?

Here's a great example of a U.S. company that embodies that kind of thinking. Some nice people at Stanford University demonstrated that chatting with Woebot for two weeks led to significant improvement in mood. Every day it asks how your day is going, how you're feeling, and what you're up to. It builds an emotional model of you over time, and it can help you see patterns in your mood. As it learns about you, it will teach you things. This is a machine that is a therapist. The machine learns about you and gives you answers. This is a serious project; this is not a joke. For us here in Europe, we would say, yeah, it's a joke, but this is a serious proposal: we're going to talk to a machine so that we can get mentally well. And I would say, if you have a condition like autism, that's maybe a good thing. But otherwise, don't we have people for this? Don't we have other things that we can do with this?

So many people are saying that because of technology, this is what happens to us. It's a big theme in Yuval Noah Harari's books; if you haven't read Harari, they're great books to read. Are we becoming useless because technology can do the standard work, because it can communicate for us, find our way, tell us where to eat? I don't think that's true.
I think what becomes useless is our routine: the things that we have to do today because the machine can't yet do them. Take an example you may have seen: some fancy houses have an electric lawnmower that drives itself. If you had a lawnmower like this, would you stand there and watch it mow the lawn, or would you do something else more meaningful? The answer is: you're going to do something more meaningful. That, I think, is where we're going in the future.

The key question is this: technology in 10 years will be capable of connecting us directly. We are at this point here, right? And we're going to wear a brain-computer interface. In 10 years this could be normal, using virtuality. Imagine WhatsApp from your head: you don't type, you don't take the phone out, you just think and send. This is not too far-fetched. We're living in the biggest technological transformation in human history. And this is especially important because of the Rinascimento, the Renaissance, which happened here. We're living in a world where everything that used to be science fiction is about to become reality: robotics, speaking machines, blockchain.

And here is what we have to do. To simply do whatever technology allows us is a very bad plan, because the answer is: it will allow us anything. Today we have lots of limitations. Artificial intelligence is not really working like it should. Translation is not really working like it should. The self-driving car isn't ready. But think 10 years from now: you can connect your brain to the Internet, and companies will make a lot of money with that. On the other hand, to not use technology because of fear, that's also a very bad strategy. Artificial intelligence, data science, the Internet of Things: those are the keys to solutions for us. Can we afford to say no, we're worried?
We have to be worried, but what do we do about it? Here's the key question. Today we sit here asking: can technology do it? How much does it cost? Does it make money? But these are the questions of the future that you have to ask, especially if you're not already 60 or 70 years old, in terms of the duration of your own life: why are we doing this? What are we trying to reach? And who is doing it? Is it some company in Silicon Valley or in China? Do I trust them? This is our future, right? Humanity is merging with technology. This is already happening; it's already right here. This is my second brain, my additional power. But how far do we go? That is the constant debate. So we need a new humanism, a Rinascimento. We need a new way of saying: we use technology, but we don't become technology. In other words, we can't afford to say no, because saying no means we basically don't exist. We have nothing to say; we have no power.

But here is the challenge. As I'm sure you know, technology doesn't have ethics. If I tell the computer to make paper clips out of you today, and the computer has the power and the objective, it will make paper clips out of you instantly, because if that's the command, that is what machines do. Apart from the laws of robotics, of course. Technology does not understand feelings, emotions, values, beliefs, consciousness, existence. This is why, for example, it's so hard for technology to understand a joke. Next time you go to America, never make a joke at the airport, because technology will not understand that you're making a joke; it will think you're going to blow something up. And what is ethics? Ethics is knowing the difference between what you have a right to do and what is the right thing to do. There is a difference. Facebook has every right to do whatever they want with our stuff, because you signed the user agreement.
It's not criminal, and Mark isn't an idiot. It's just totally unethical. Facebook is not criminal; it's unethical. And this is something we have to think about: what do we do about things that are unethical? (Can we bring the volume down just a little bit more? I think it's a little too loud.)

Elon Musk says we should compete with technology, and to compete we need to connect. So he has invented this ingenious project called Neuralink, which is basically merging your mind with machines by drilling tiny holes into your skull, to allow you to control machines directly. You can watch that on YouTube. Is this a good idea? He says if we don't become stronger as humans, if we don't go beyond our limitations, we're not going to survive. And I say: if we don't find our way back to who we are, we will not survive. I think upgrading ourselves is really going to be a downgrade. I'm not talking about cholesterol pills or statins or medication; I'm talking about fundamentally reprogramming ourselves. If you're sick, I can see that being good. If you're quadriplegic and you can use that to walk, that's a good thing. That is, of course, a huge philosophical debate.

So this is the world we're living in today. We're getting tremendous benefit from technology. We're getting convenience, commerce, content, communication for free. We get a huge amount of things, and it's provided by these companies, the digital factories, whether they're American or Chinese, that are essentially running the world: the richest companies in the world. Many of them are my clients, so I know how that works. But on the other side of this wonder, we have pollution, we have addiction, we have tax avoidance, we have surveillance, we have manipulation. The principle is really quite simple: too much of a good thing is a very bad thing. And this is something we have to pay attention to, because guess what: today, on a scale of 1 to 100, we're only at 3.
In the next 10 years, we're going to go to 100. We're going to a different place. This is about trust; it's about accountability and responsibility. You know, the internet companies, the tech companies, have for a long time said, like the gun companies: we're not responsible for how people use our technology. That's like the gun companies in America saying guns don't kill people, people kill people. This is the cheapest possible excuse you can think of. If you're making a smart city, then you are responsible for the consequence of a smart city, which is surveillance. Read Snowden's book; you'll learn all about this.

So, here's a comment from Marc Benioff, the CEO of Salesforce, who talks about this very problem. Marc is a really smart guy, and this was a recent interview: Facebook is the new cigarettes. It's not good for you. It's addictive. You don't know who's trying to convince you to use it or misuse it. The government has to step in and regulate it. And Facebook has proven that to us, ever since he said it over nine months ago, over and over and over again: they need to be regulated, because they're not self-regulating. When something is as powerful as technology is today, we have two choices. We either say: you help us protect human values, you protect us, or we're going to make you protect us. And I'm not for regulation in general; I'm against it. I'm for the free market. But this is a company with 2.6 billion users, and Mark owns 62% of it. He cannot be fired. He can decide; he is the most powerful man in the world. And you are probably all on Facebook, feeding him information.

So I've suggested in my book that we need a digital ethics council. Going back to ancient Greece, and of course Italy as well, we had wise people, Socrates, Aristotle, who were not really politicians. They were not CEOs. They were just people who could think.
Let's find the top 10 people in Italy, the thinkers, and have them think about how we can solve this problem: how we can use technology without becoming slaves of technology. What are the limitations? In 10 years we're going to be able to take the human genome and change it; that has already been tried today. So I have proposed this in my book, and I think we should have it on a city level, a regional level, a national level and a global level. They should not necessarily be people who decide on practical things, but people who put out information that we can decide on. And you all know, sooner or later we're going to end up with a world government, right? I know many of us don't like the idea, but this is where we're going, because our lives are going into the cloud. This is not science fiction; this is happening now. Your music, your films, your dating, your banking, your insurance, your healthcare, your driving, transportation, money, food: the cloud. Because the cloud is superior: better, faster, more efficient. If your healthcare is in the cloud, you're going to live longer. It's as simple as that.

But the more we connect, the more we must protect. It sounds like an oxymoron, like a contradiction, but we cannot just connect without putting the human back inside, without having rules about how we connect and who is in charge. If we leave this to companies who make money with our connection, we're in deep trouble. Look what happened to media and journalism: 40% of the kids around the world get their news from Facebook. Can you imagine? If you go to India, people think that Facebook is the Internet, because that's all they use. They search on Facebook and they think they're searching the Internet. If we want the future, we're going to have to invest in more humanity, to put that inside.
We're going to go down this highway of digital change, with what I call the game changers: 3D printing, artificial intelligence. So here's my suggestion to politics. Every public official should have a driver's license for the future. A test. I want every mayor and every candidate and every president to be tested on whether they are fit for the future. Thank you. Do they understand the future? I live in Switzerland. Who cares? We're going to be redesigning humans. Can we think about that? Copyright law? Yeah, it's an interesting debate. But we've got to think about the future of work. How are we going to work when machines do everything? What is that going to mean for us?

So I'm proposing three new digital human rights in the book. There are five, but you'll get three here. First, we need the right to disconnect. We need the right to say: I am not going to always be connected, because it makes you sick. It's like saying: I'm always going to eat. What happens when you always eat? You die. 40% of Americans are obese; more people are dying from obesity than from hunger. That's a sad statistic. We can't always be connected. That's not who we are. Second, the right to inefficiency. Mystery. Chance. Serendipity. Discovery. It may be a romantic idea, but free will: this is Italy, and the whole idea comes from Italy. And lastly, the right to refuse augmentation. We should not be forced to connect with augmented reality, virtual reality, or even the smartphone just to have a job. And that is the future, believe me, you'll see it coming: the augmented reality test if you want a job. You have to put this thing on, you pretend you're at work, and then they measure what you do. That, to me, should be illegal. I know it's a good tool, but we're going to use it instead of talking to people. Because the bottom line really is this: there's no such thing as a download for happiness. Trust isn't digital.
Relationships can't be expressed as code, and no download can make you happy. This is a human factor; this is not something that machines do. Philosophers have said that technology is not what we seek, but how we seek. It's a tool, and a tool can be great or it can be terrible. If I'm a carpenter, I could take a hammer and build something, or I could kill somebody with it. But what is the purpose of our lives, which is not technology? Generally speaking, it's usually happiness, or self-realization, contentment, whatever you want to call it. But it's not technology itself.

So here's an important question when we talk about our future, especially in Italy: who is mission control? Are you actually controlling your future? Where is our future control today? Take a wild guess: in Silicon Valley, and increasingly in China. 95% of all technology, servers and infrastructure is at home in California. We are living under the emergency law of the United States from September 11th; that is the law that's binding on the internet. Isn't it time that we think about what we want? And what do we want in Europe? We want human happiness. We're humanists in Europe, you could say. Not that Americans aren't humanists, but there's the issue of money that comes in between. And that brings up an important question: what is our future role in Europe? If we all come together, I think this is our future. No matter what difficulty it brings, I'm a great believer in the United States of Europe. This is how we're going to solve the problems of cybersecurity, of water, food, energy: together. Because the problem is, if we're not together, then it's the Americans and the Chinese who basically decide what is going to happen. And you know what America wants to do: as much as I like it there, it's about corporate benefit, corporate profit, and progress at all costs. And you know what China is deciding, right?
Same thing, except that it's the state that gets the money, not the public markets. So I think we could become a global leader in digital ethics, in the right use of technology. And the European Commission is doing a great job; the GDPR is a great example. Nobody likes it. Is it the right approach? Absolutely. We're going to opt in, not opt out. That is the right approach. It's painful, but that, I think, is a great position for us to have.

Let me finally talk about the paradigm shift, and then it's time to finish up here. We have a new economic logic coming; you can hear about it every day. And it's funny: this year is the year where it's all exploding. This is the new economic logic; I call it the quadruple bottom line. (I'm going to publish the slides later, by the way, on my website.) These are the four things: people, planet, purpose, prosperity. What we have today on the stock market is only one of them: prosperity. This is going to kill us, first with climate change, second with technology. In other words, if we don't solve the economic paradigm of how we make a living, we have 40 years left. 50. Until we become machines. And the machines will all live in the northern part of the world, right?

Robert F. Kennedy said already in the 60s that GDP, GNP, measures everything except what makes life worthwhile. That was in the 60s. Is it enough if we make money? If we progress? If we have jobs? Four weeks ago, the U.S. Business Roundtable of CEOs said that it's no longer enough to have shareholder value, to make money for your stocks; it is now going toward what's called stakeholder value. Again, Marc Benioff has spoken about this idea of how we make money and what we do in the future: not shareholder return, but stakeholder return. The stakeholders are us, humans: public benefit. And this is what technology allows us to do, if we have the political will, of course. So we're going toward this future. I call it sustainable capitalism.
Again, it sounds like a contradiction. We've tried other forms of sustainability, like socialism. This is not socialism; this is a wider view of what we can do in the market: holistic business models, benefit to people, the circular economy, sustainability and human benefit. And to get to that point, what kind of people do we need? Do we need more scientists and engineers? Well, we always need those. But what do we really need? Wise people, philosophers and ethicists, artists, people who think larger. We need to train our kids to understand this new paradigm. Everything we know is going to be robotized: the factories, the cars, the accounting. Do we need an automation tax? Do we need a universal basic income? Somehow, we're going to have to give the money back. If the machine does all the things, where does the money go? Well, right now the answer is simple: it goes to the companies who make the robots. It needs to come back to us, to the wider public, not just to some of us. This balance is the key to our future, especially in Italy, because in Italy you're going to face this scenario of automation even more than other countries, for certain reasons.

But this is the crucial handshake. On one hand, we want privacy, we want security, we want autonomy. On the other hand, we want progress. It would be wrong to choose just one side. You can't say: well, I just want privacy, and I don't want any of this. That's going to be very difficult, unless you live in the mountains in New Zealand, maybe. We're going to have to find a compromise. So here's the principle of the future: you can be pulled into the future and do nothing. That's what we've been doing in Europe for a long time. Our best researchers leave our universities; they go to America and they go to China. Now it's time for us to create our own future by action. To say: we want technology, but we want it up to a point, to actually help us be human.
This is the final word in my book, and it's the main theme of my book: we have to embrace technology, but not become technology. The only way we can solve this is by doing it together. Let me remind you, I'm an optimist. I think technology holds the key for us to solve most problems. All we have to do is solve the problem of technology, and that is a decision we have to make together. Thank you very much for listening. Thank you.